Compare commits

...

10 Commits

Author SHA1 Message Date
1ae1d8397d Update terraform to use instance profile 2025-06-22 12:11:10 -07:00
3353aeb26b Add wake command for ec2 2025-06-22 09:45:51 -07:00
6055a94285 Fix jpg 2025-06-16 21:58:12 -07:00
d10c38e2a2 Fix pdf 2025-06-16 18:54:46 -07:00
944c17f9ad Finish up lab3 2025-06-16 18:54:45 -07:00
e7297ff195 Finish up lab3 2025-06-16 18:50:02 -07:00
d1e7574cf9 Finish lab 3 2025-06-16 18:48:28 -07:00
134b13f5cc update notes 2025-06-13 21:25:56 -07:00
44fa26ba30 Add resume pdf & html 2025-06-13 21:25:37 -07:00
989107517a Add resume pdf & html 2025-06-12 15:32:08 -07:00
24 changed files with 670 additions and 137 deletions

@@ -1,5 +1,6 @@
-{
-  "MFA_IDENTIFIER": "ARN",
-  "S3_ROLE": "ARN",
-  "SESSION_TYPE": ""
-}
+MFA_IDENTIFIER="ARN"
+S3_ROLE="ARN"
+SESSION_TYPE=""
+AWS_DEFAULT_REGION="us-east-2"
+BW_AWS_ACCOUNT_SECRET_ID=""
+BW_SESSION=""

.gitignore (vendored)

@@ -1,6 +1,8 @@
 .envrc
+terraform/
 private*
+todo.md
+.terraform*
+*.tfstate*
 # Byte-compiled Python files
 __pycache__/

@@ -1,111 +0,0 @@
# Lab 3
## Prep
- [x] Gitea set up
- [x] MFA set up
- [x] Add git ignore
- [x] Secrets/Token Management
- [x] Consider secret-scanning
- [x] Added git-leaks on pre-commit hook
- [x] Create & Connect to a Git*** repository
- [x] https://git.dropbear-minnow.ts.net/
- [x] Modify and make a second commit
![image of terminal](./assets/prep-console.png)
- [x] Test to see if gitea actions works
- [x] Have an existing s3 bucket
## Resources
- [x] [Capital One Data Breach](./assets/Capital%20One%20Data%20Breach%20—%202019.%20Introduction%20_%20by%20Tanner%20Jones%20_%20Nerd%20For%20Tech%20_%20Medium.pdf)
- [x] [Grant IAM User Access to Only One S3 Bucket](./assets/Grant%20IAM%20User%20Access%20to%20Only%20One%20S3%20Bucket%20_%20Medium.pdf)
- [ ] [IAM Bucket Policies](./assets/From%20IAM%20to%20Bucket%20Policies_%20A%20Comprehensive%20Guide%20to%20S3%20Access%20Control%20with%20Console,%20CLI,%20and%20Terraform%20_%20by%20Mohasina%20Clt%20_%20Medium.pdf)
- [ ] [Dumping S3 Buckets!](https://www.youtube.com/watch?v=ITSZ8743MUk)
## Lab
- [x] create a custom IAM Policy
- [x] create an IAM Role for EC2
![trust relationships](./assets/trust-relationships.jpg)
![permissions](./assets/permissions.jpg)
- [x] Attach the Role to your EC2 Instance
- [x] Verify s3 access from the EC2 Instance
* HTTPS outbound was not set up
* I did not check outbound rules (even though the lab explicitly called this out)
because the step referenced lab 2, so I assumed they had already been set up
(they had not). When the connection to s3 failed, I double-checked the lab 3 instructions.
![screenshot of listing s3 contents](./assets/s3-access-screenshot.jpg)
### Stretch
- [x] Create a bucket policy that blocks all public access but allows your IAM role
- [ ] Implemented: [guide](https://aws.amazon.com/blogs/security/how-to-restrict-amazon-s3-bucket-access-to-a-specific-iam-role/)
![restrict to role](./assets/restrict-to-role.jpg)
- [ ] **Experiment** with requiring MFA or VPC conditions.
- [ ] MFA conditions
* MFA did not work out of the box after setting it in the s3 bucket policy.
The ways I found you can configure MFA:
* [stackoverflow](https://stackoverflow.com/questions/34795780/how-to-use-mfa-with-aws-cli)
* [official guide](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use-resources.html)
* via cli roles
* configuration via ~/.aws/credentials
* 1Password CLI with AWS Plugin
* I use bitwarden, which also has an AWS Plugin
* This is probably what I will gravitate towards for a more
long-term setup, because having all of these credentials
floating around in various areas on my computer/virtualbox
envs gets confusing. Not a fan.
* I've seen a lot more recommendations (TBH it's more like 2 vs 0)
for 1password for password credential setup. Wonder why?
* other apps that handle this
- [ ] VPC
- [ ] **Host a static site**
- [ ] Enable static website hosting (`index.html`)
- [ ] Configure route 53 alias or CNAME for `resume.<yourdomain>` to the bucket endpoint.
- [ ] Deploy CloudFront with ACM certificate for HTTPS
#### Private "Invite-Only" Resume Hosting
1. **Pre-signed URLs**
`aws s3 presign s3://<YOUR_BUCKET_NAME>/resume.pdf --expires-in 3600`
2. **IAM-only access**
- [ ] Store under `private/`
- [ ] Write a bucket policy allowing only the role `EC2-S3-Access-Role-daphodell` to `GetObject`
3. **Restrict to IP address**
- [ ] copy pasta json into bucket policy
### Further Exploration
1. [ ] Snapshots & AMIs
- [ ] Create an EBS snapshot of `/dev/xvda`
- [ ] Register/create an AMI from that snapshot
- [ ] How do you "version" a server with snapshots? Why is this useful?
- [ ] Launch a new instance from your AMI
2. [ ] Linux & Security Tooling
3. [ ] Scripting & Automation
- [ ] Bash: report world-writable files
- [ ] Python with boto3: list snapshots, start/stop instances
## Further Reading
- [ ]
- [ ]
- [ ]
## Reflection
* What I built
* Challenges
* Security concerns
On scale and security at scale
## Terms
### Identity Access Management
```mermaid
graph LR
IAMPolicy -- attaches to --> IAMIdentity
ExplainIAMIdentity[users, groups of users, roles, AWS resources]:::aside
ExplainIAMIdentity -.-> IAMIdentity
classDef aside stroke-dasharray: 5 5, stroke-width:2px;
```
## End lab
- [ ] On June 20, 2025, do the following:
- [ ] Clean up
- [ ] Custom roles
- [ ] Custom policies
- [ ] Stop ec2 Instance
- [ ] Remove s3 bucket


@@ -1 +1,239 @@
-# My First Repo
+# Lab 3
## Prep
- [x] Gitea set up
- [x] MFA set up
- [x] Add git ignore
- [x] Secrets/Token Management
- [x] Consider secret-scanning
- [x] Added git-leaks on pre-commit hook
- [x] Create & Connect to a Git repository
- [x] https://code.wizards.cafe
- [x] Modify and make a second commit
![image of terminal](./assets/prep-console.png)
- [x] Test to see if gitea actions works
- [x] Have an existing s3 bucket
## Resources
- [x] [Capital One Data Breach](./assets/Capital%20One%20Data%20Breach%20—%202019.%20Introduction%20_%20by%20Tanner%20Jones%20_%20Nerd%20For%20Tech%20_%20Medium.pdf)
- [x] [Grant IAM User Access to Only One S3 Bucket](./assets/Grant%20IAM%20User%20Access%20to%20Only%20One%20S3%20Bucket%20_%20Medium.pdf)
- [ ] [IAM Bucket Policies](./assets/From%20IAM%20to%20Bucket%20Policies_%20A%20Comprehensive%20Guide%20to%20S3%20Access%20Control%20with%20Console,%20CLI,%20and%20Terraform%20_%20by%20Mohasina%20Clt%20_%20Medium.pdf)
- [ ] [Dumping S3 Buckets!](https://www.youtube.com/watch?v=ITSZ8743MUk)
## Lab
### Tasks
- [x] **4.1. Create a Custom IAM Policy**
![trust relationships](./assets/trust-relationships.jpg)
- [x] **4.2 Create an IAM Role for EC2**
![permissions](./assets/permissions.jpg)
- [x] **4.3. Attach the Role to your EC2 Instance**
- [x] **4.4. Verify s3 access from the EC2 Instance**
* HTTPS outbound was not set up
* I did not check outbound rules (even though the lab explicitly called this out)
because the step referenced lab 2, so I assumed they had already been set up
(they had not). I had to go back and add the missing outbound rule.
![screenshot of listing s3 contents](./assets/s3-access-screenshot.png)
### Stretch Goals
- [x] **Deep Dive into Bucket Policies**
- [x] **1. Create a bucket policy** that blocks all public access but allows your IAM role
- Implemented using: [guide](https://aws.amazon.com/blogs/security/how-to-restrict-amazon-s3-bucket-access-to-a-specific-iam-role/)
![restrict to role](./assets/restrict-to-role.png)
- [x] **2. Experiment** with requiring MFA or VPC conditions.
- [x] MFA conditions
* MFA did not work out of the box after setting it in the s3 bucket policy.
The ways I found you can configure MFA:
* [stackoverflow](https://stackoverflow.com/questions/34795780/how-to-use-mfa-with-aws-cli)
* [official guide](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use-resources.html)
* [x] via cli roles - I set up a new set of role-trust relationships.
* Update the s3 Role:
* Update action: `sts:AssumeRole`
* Update principal (since you cannot target groups, I set it first to
my user, and then to the root account for scalability)
* Add condition (the MFA bool must be true)
* Commands referenced: I set up a script that looks like this
```bash
MFA_TOKEN=$1
if [ -z "$MFA_TOKEN" ]; then
    echo "Error: Run with MFA token!"
    exit 1
fi
if [ -z "$BW_AWS_ACCOUNT_SECRET_ID" ]; then
    echo "env var BW_AWS_ACCOUNT_SECRET_ID must be set!"
    exit 1
fi
# Pull long-lived keys out of Bitwarden
AWS_SECRETS=$(bw get item "$BW_AWS_ACCOUNT_SECRET_ID")
export AWS_ACCESS_KEY_ID=$(echo "$AWS_SECRETS" | jq -r '.fields[0].value')
export AWS_SECRET_ACCESS_KEY=$(echo "$AWS_SECRETS" | jq -r '.fields[1].value')
# Trade them for MFA-gated temporary credentials
SESSION_OUTPUT=$(aws sts assume-role --role-arn "$S3_ROLE" --role-session-name "$SESSION_TYPE" --serial-number "$MFA_IDENTIFIER" --token-code "$MFA_TOKEN")
export AWS_SESSION_TOKEN=$(echo "$SESSION_OUTPUT" | jq -r '.Credentials.SessionToken')
export AWS_ACCESS_KEY_ID=$(echo "$SESSION_OUTPUT" | jq -r '.Credentials.AccessKeyId')
export AWS_SECRET_ACCESS_KEY=$(echo "$SESSION_OUTPUT" | jq -r '.Credentials.SecretAccessKey')
aws s3 ls s3://witch-lab-3
```
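For reference, the trust-policy statement those steps produce can be sketched like this (reconstructed from the Terraform version of the role later in this diff; the account id is a placeholder):

```json
{
  "Effect": "Allow",
  "Principal": { "AWS": "arn:aws:iam::123456789012:root" },
  "Action": "sts:AssumeRole",
  "Condition": {
    "BoolIfExists": { "aws:MultiFactorAuthPresent": "true" }
  }
}
```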
* configuration via ~/.aws/credentials
* 1Password CLI with AWS Plugin
* I use bitwarden, which also has an AWS Plugin
* I've seen more recommendations for 1Password for credential
setup (TBH it's more like 2 vs 0). I wonder why?
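For the `~/.aws/credentials` route mentioned above, the CLI can also do the MFA prompt itself when the profile names a role and an MFA device. A minimal sketch (profile name and ARNs are placeholders):

```ini
# ~/.aws/config
[profile s3-mfa]
role_arn = arn:aws:iam::123456789012:role/s3-access
mfa_serial = arn:aws:iam::123456789012:mfa/my-device
source_profile = default
```

Running e.g. `aws s3 ls --profile s3-mfa` then prompts for the token code.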
- [x] **3. Host a static site**
- [x] Enable static website hosting (`index.html`)
- [x] Configure a Route 53 alias or CNAME for `resume.<yourdomain>` to the bucket endpoint.
- [x] Deploy CloudFront with ACM certificate for HTTPS
* see: [resume](https://resume.wizards.cafe)
- [x] **b. Pre-signed URLs**
(see: presigned url screenshot)
`aws s3 presign s3://<YOUR_BUCKET_NAME>/resume.pdf --expires-in 3600`
* Cloudflare Edge Certificate -> Cloudfront -> S3 Bucket
* In this step, I disabled "static website hosting" on the s3 bucket
* This was actually maddening to set up. For reasons I couldn't figure out even
after Google searching and asking ChatGPT, my s3 bucket is in us-east-2
but CloudFront kept redirecting me to us-east-1. I don't like switching
regions in AWS because it makes it easy to forget which region a given
service was created in; resources are hidden unless their region is active.
![presigned url](./assets/create-presigned-url.jpg)
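Since the presign TTL is easy to fat-finger, a tiny wrapper can pin a default. A sketch (pure shell; it only builds and prints the command, so nothing here touches AWS, and the bucket/key are just the ones from this lab):

```shell
# Build the presign invocation for a bucket/key, defaulting the TTL to 1 hour
presign_cmd() {
  printf 'aws s3 presign s3://%s/%s --expires-in %s\n' "$1" "$2" "${3:-3600}"
}

presign_cmd witch-lab-3 resume.pdf 600
# aws s3 presign s3://witch-lab-3/resume.pdf --expires-in 600
```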
- [x] Import resources into terraform
Ran some commands:
```sh
terraform import aws_iam_policy.assume_role_s3_policy $TF_VAR_ASSUME_ROLE_POLICY
...
terraform plan
terraform apply
```
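As an aside, Terraform 1.5+ also supports declarative `import` blocks as an alternative to the `terraform import` CLI calls above; a sketch (the ARN is a placeholder standing in for `$TF_VAR_ASSUME_ROLE_POLICY`):

```hcl
import {
  to = aws_iam_policy.assume_role_s3_policy
  id = "arn:aws:iam::123456789012:policy/assume-role-s3"  # placeholder ARN
}
```

With this, `terraform plan` shows the import as part of the plan instead of mutating state immediately.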
### Further Exploration
- [x] Snapshots & AMIs
- [x] Create an EBS snapshot of `/dev/xvda`
- [x] Register/create an AMI from that snapshot
* I think with Terraform we can combine the two steps: `aws_ami_from_instance`
gives the end result (an AMI created from fresh snapshots) in one resource.
Otherwise I think
you would want to do [aws_ebs_snapshot](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ebs_snapshot) and then [aws_ami](https://registry.terraform.io/providers/hashicorp/aws/5.99.1/docs/resources/ami)
```hcl
resource "aws_ami_from_instance" "ami_snapshot" {
name = "ami-snapshot-${formatdate("YYYY-MM-DD", timestamp())}"
source_instance_id = aws_instance.my_first_linux.id
snapshot_without_reboot = true
tags = {
Name = "labs"
}
}
```
![ami creation](./assets/ami-creation.jpg)
- [x] How do you "version" a server with snapshots? Why is this useful?
* **Cattle, not pets**
This is useful for following the concept of treating your servers as
"cattle, not pets". Being able to keep versioned snapshots of your machines
means there's nothing special about your currently running server.
If it goes down (or you need to shoot it down), you can restore it on
another machine from an older snapshot.
Or, as another example, suppose you were tasked with installing a whole suite of tools
on an ec2 instance (ex: fail2ban, ClamAV, etc.). Then your boss tells
you the company needs the same setup for another 49 instances.
Having one AMI that contains all of those tools can save you from
re-running the same commands 50 times.
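One simple way to do the "versioning" part is to encode a version and a date in the AMI name so images sort chronologically. A sketch (the name scheme is hypothetical, not from this lab):

```shell
# Compose a versioned AMI name: base name, version, build date
ami_name() {
  printf 'web-base-v%s-%s\n' "$1" "$2"
}

ami_name 3 2025-06-17
# web-base-v3-2025-06-17
```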
- [x] Launch a new instance from your AMI
Terraform is great. Terraform is life. This was really simple to do.
```hcl
# Launch new instance from AMI
resource "aws_instance" "my_second_linux" {
  instance_type        = "t2.micro"
  ami                  = aws_ami_from_instance.ami_snapshot.id
  security_groups      = ["ssh-access-witch"]
  iam_instance_profile = aws_iam_instance_profile.daphodell_profile.name
  tags = {
    Name = "labs"
  }
}
```
- [x] Convert to terraform
* Terraform files can be found [here](./terraform/main.tf)
## Reflection
* What I built
* A secured s3 bucket that can only be accessed via multi-factor authentication.
Good for storing particularly sensitive information.
* A minimal HTML website served from an S3 bucket
* Challenges
* The stretch goal for setting up s3 + MFA was a bit of a pain:
* Groups cannot be used as the principal in a trust relationship, so to get
things working I added the trust relationship to my user's ARN instead.
I prodded ChatGPT for a more practical approach (per-user ARNs wouldn't scale
with 100s of users, onboarding/offboarding, etc.) and had to go back and fix
how the policies worked.
* Issues between setting up Cloudflare -> CloudFront -> s3 bucket
* I think adding an extra service (Cloudflare, where I host my domain) added
a little complexity, though my main issue was figuring out how to wire up the
ACM cert -> CloudFront distribution -> S3.
Most of the instructions I was able to parse through with ChatGPT -- I have to
say I had a much better time reading those than the official AWS docs, which
led me through nested links (understandably, because there seem to be multiple
ways of doing everything).
* Security concerns
Scale and security at scale.
I started out this lab doing "click-ops", and I noticed while testing
connections that there was a lot of trial and error in setting up permissions.
My process went: OK, this seems pretty straightforward, let's just add the
policy. But each policy I added triggered a cascade of errors from additional
permissions or trust relationships I had forgotten, which weren't obvious
until I went through the error logs one by one.
Once everything was set up via click-ops and imported into Terraform, though,
repeating the same steps via Terraform was *very easy*.
I think putting everything down into code really helps to self-document
the steps it takes to get a fully functioning setup.
## Terms
### Identity Access Management
```mermaid
graph LR
IAMPolicy -- attaches to --> IAMIdentity
ExplainIAMIdentity[users, groups of users, roles, AWS resources]:::aside
ExplainIAMIdentity -.-> IAMIdentity
classDef aside stroke-dasharray: 5 5, stroke-width:2px;
```
![Identity Access Management](./assets/mermaid.jpg)

BIN lab-3/README.pdf (new file)
BIN lab-3/assets/mermaid.jpg (new file)
(several other binary image assets under lab-3/assets/ added or updated; not shown)
lab-4/README.md (new file)

@@ -0,0 +1,4 @@
# Lab 4
## End Lab
- [ ] Turn off ec2

lab-resume/LAB-REPORT.md (new file)

@@ -0,0 +1,22 @@
# [daphodell]
- [git](https://code.wizards.cafe)
### Projects
- Placeholder
## Skills
- Cloud Platforms & Security: AWS (IAM)
- Infrastructure as Code: Terraform, CloudFormation
- Programming and Scripting: JavaScript, Bash, Go
- DevOps & Automation: CI/CD Pipelines (GitHub Actions, Gitlab, Gitea Actions), Docker
## Professional Experience
### Senior Software Engineer
[Company A], 2025
- Automated data pipeline to ingest and transform data from unstructured municipal websites
- Leveraged LLMs to parse and normalize data
- Improved update frequency and data accuracy via integration and baseline testing.
### Senior Software Engineer
[Company B], 2022-2024
- Placeholder text

BIN lab-resume/LAB-REPORT.pdf (new file)

lab-resume/index.html (new file)

@@ -0,0 +1,157 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>daphodell Resume</title>
<meta name="viewport" content="width=device-width, initial-scale=1">
<style>
/* LaTeX-like styling */
body {
font-family: 'Computer Modern', 'Latin Modern Roman', 'Times New Roman', Times, serif;
background: #f9f9f9;
color: #222;
margin: 0;
padding: 0;
}
.container {
max-width: 800px;
margin: 40px auto;
background: #fff;
box-shadow: 0 2px 8px rgba(0,0,0,0.08);
padding: 40px 48px;
border-radius: 8px;
position: relative;
}
h1, h2, h3 {
font-family: 'Latin Modern Roman', 'Computer Modern', 'Times New Roman', Times, serif;
font-weight: normal;
margin-top: 1.5em;
margin-bottom: 0.5em;
}
h1 {
font-size: 2.2em;
border-bottom: 2px solid #222;
padding-bottom: 0.2em;
margin-bottom: 1em;
letter-spacing: 0.02em;
}
h2 {
font-size: 1.3em;
color: #333;
margin-top: 1.5em;
border-bottom: 1px solid #bbb;
padding-bottom: 0.1em;
}
h3 {
font-size: 1.1em;
color: #444;
margin-top: 1em;
margin-bottom: 0.2em;
}
ul {
margin-top: 0.2em;
margin-bottom: 1em;
padding-left: 1.2em;
}
li {
margin-bottom: 0.3em;
}
.section {
margin-bottom: 2em;
}
.job-title {
font-weight: bold;
font-size: 1.05em;
}
.job-company {
font-style: italic;
color: #555;
margin-left: 0.5em;
}
.job-dates {
float: right;
color: #888;
font-size: 0.95em;
}
.header-links {
position: absolute;
right: 48px;
top: 48px;
font-size: 0.92em;
opacity: 0.7;
}
.header-links a {
color: #888;
text-decoration: none;
margin-left: 0;
font-size: 1em;
padding: 0;
background: none;
border: none;
border-radius: 0;
transition: color 0.2s;
}
.header-links a:hover {
color: #0072b1;
background: none;
border: none;
}
@media (max-width: 600px) {
.container { padding: 16px 6vw; }
.job-dates { float: none; display: block; margin-top: 0.2em; }
.header-links {
position: static;
display: block;
margin-top: 0.5em;
right: auto;
top: auto;
text-align: right;
}
}
</style>
</head>
<body>
<div class="container">
<h1 style="margin-bottom:0.5em;">
daphodell
</h1>
<span class="header-links">
<a href="https://code.wizards.cafe" title="Git profile" target="_blank" rel="noopener">
git
</a>
</span>
<div class="section">
<h2>Skills</h2>
<ul>
<li>Cloud Platforms &amp; Security: AWS (IAM)</li>
<li>Infrastructure as Code: Terraform, CloudFormation</li>
<li>Programming and Scripting: JavaScript, Bash, Go</li>
<li>DevOps &amp; Automation: CI/CD Pipelines (GitHub Actions, Gitlab, Gitea Actions), Docker</li>
</ul>
</div>
<div class="section">
<h2>Professional Experience</h2>
<div>
<span class="job-title">Senior Software Engineer</span>
<span class="job-company">[Company A]</span>
<span class="job-dates">2025</span>
<ul>
<li>Automated data pipeline to ingest and transform data from unstructured municipal websites</li>
<li>Leveraged LLMs to parse and normalize data</li>
<li>Improved update frequency and data accuracy via integration and baseline testing.</li>
</ul>
</div>
<div>
<span class="job-title">Senior Software Engineer</span>
<span class="job-company">[Company B]</span>
<span class="job-dates">2022-2024</span>
<ul>
<li>Placeholder text</li>
</ul>
</div>
</div>
</div>
</body>
</html>

@@ -1,22 +1,32 @@
 [tools]
 aws-cli = 'latest'
-bitwarden = 'latest'
-jq = 'latest'
+pandoc = 'latest'
+terraform = 'latest'
+bw = 'latest'
+# Also required:
+# - bitwarden CLI
+# - jq

 [env]
-_.file = '.env.json'
-BLAH = "{{ exec(command=\"bw get item $BW_AWS_ACCOUNT_SECRET_ID\") }}"
+_.file = '.env'

 [tasks.ssh]
 run = "ssh -p 5679 vboxuser@127.0.0.1"

-[tasks.generate]
+[tasks.labgen]
 run = "./utilities/pdf_make/labs.sh"

-[tasks.setup-aws]
-run = """
-export SECRETS_OBJECT=$(bw get item $BW_AWS_ACCOUNT_SECRET_ID)
-export AWS_ACCESS_KEY_ID=$(echo "$SECRETS_OBJECT" | jq -r '.fields[0].value')
-export AWS_SECRET_ACCESS_KEY=$(echo "$SECRETS_OBJECT" | jq '.fields[1].value')
-"""
+[tasks.poke-s3]
+run = """
+./utilities/setup_aws/use-s3.sh
+"""
+
+[tasks.sleep]
+run = """
+aws ec2 stop-instances --instance-ids $TF_VAR_EC2_INSTANCE_ID --region us-east-2
+"""
+
+[tasks.wake]
+run = """
+aws ec2 start-instances --instance-ids $TF_VAR_EC2_INSTANCE_ID --region us-east-2
+"""

@@ -5,3 +5,5 @@
 ## Read
 - [ ] [Debugging Zine](https://jvns.ca/debugging-zine.pdf)
 - [ ] [The 5 Cybersecurity roles that will disappear first](./assets/The%205%20Cybersecurity%20Roles%20That%20Will%20Disappear%20First%20_%20by%20Taimur%20Ijlal%20_%20Jun,%202025%20_%20Medium-1.pdf)
+- [ ] Cloud Security For Beginners
+- [ ] Sandworm

terraform/main.tf (new file)

@@ -0,0 +1,141 @@
resource "aws_instance" "my_first_linux" {
instance_type = "t2.micro"
ami = "ami-06971c49acd687c30"
security_groups = ["ssh-access-witch"]
iam_instance_profile = aws_iam_instance_profile.daphodell_profile.name
tags = {
Name = "labs"
}
}
resource "aws_s3_bucket" "resume-bucket" {
tags = {
Name = "labs"
}
}
resource "aws_s3_bucket" "lab-bucket" {
tags = {
Name = "labs"
}
}
## Security Group
resource "aws_security_group" "ec2_security_group" {
description = "Restrict SSH to my IP"
}
resource "aws_security_group" "default" {
description = "default VPC security group"
}
## Roles
resource "aws_iam_role" "daphodell_role" {
assume_role_policy = jsonencode({
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Action": "sts:AssumeRole"
},
{
"Effect": "Allow",
"Principal": {
"AWS": var.ACCOUNT_ROOT_ARN
},
"Action": "sts:AssumeRole",
"Condition": {
"BoolIfExists": {
"aws:MultiFactorAuthPresent": "true"
}
}
}
]})
tags = {
Name = "labs"
}
}
resource "aws_iam_instance_profile" "daphodell_profile" {
name = "daphodell_profile"
role = aws_iam_role.daphodell_role.name
}
## Policies
## Allow user CLI -> S3 read/write
resource "aws_iam_policy" "assume_role_s3_policy" {
policy = jsonencode({
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Resource": aws_iam_role.daphodell_role.arn
}
]
})
tags = {
Name = "labs"
}
}
## Allow ec2 -> s3 read/write
resource "aws_iam_policy" "ec2_s3_policy" {
policy = jsonencode({
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::witch-lab-3"
},
{
"Effect": "Allow",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::witch-lab-3/*"
}
]
})
tags = {
Name = "labs"
}
}
# # Create AMI
# resource "aws_ami_from_instance" "ami_snapshot" {
# name = "ami-snapshot-2025-06-17"
# source_instance_id = aws_instance.my_first_linux.id
# snapshot_without_reboot = true
#
# tags = {
# Name = "labs"
# }
# }
#
# # Launch new instance from AMI
# resource "aws_instance" "my_second_linux" {
# instance_type = "t2.micro"
# ami = aws_ami_from_instance.ami_snapshot.id
# security_groups = ["ssh-access-witch"]
#
# tags = {
# Name = "labs"
# }
# }

terraform/providers.tf (new file)

@@ -0,0 +1,20 @@
provider "aws" {
region = "us-east-2"
}
variable "EC2_INSTANCE_ID" {
description = "ID of the EC2 instance"
type = string
}
variable "ASSUME_ROLE_POLICY" {
type = string
}
variable "EC2_POLICY" {
type = string
}
variable "ACCOUNT_ROOT_ARN" {
type = string
}
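None of these variables have defaults, so Terraform expects them from the environment via the `TF_VAR_` prefix, which is also why the mise tasks reference `$TF_VAR_EC2_INSTANCE_ID` directly. A sketch (the values are placeholders):

```shell
# Terraform maps TF_VAR_<name> env vars onto variable "<name>" blocks
export TF_VAR_EC2_INSTANCE_ID="i-0123456789abcdef0"               # placeholder id
export TF_VAR_ACCOUNT_ROOT_ARN="arn:aws:iam::123456789012:root"   # placeholder ARN
# terraform plan   # would now see var.EC2_INSTANCE_ID and var.ACCOUNT_ROOT_ARN
echo "$TF_VAR_EC2_INSTANCE_ID"
```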

@@ -1,5 +1,5 @@
-# pdf_make/Dockerfile
+# Use the official Pandoc image as base
-FROM pandoc/latex:2.19
+FROM pandoc/latex:latest
 WORKDIR /app

@@ -9,19 +9,19 @@ cd /labs || { echo "Error: /labs directory not found in container. Exiting."; ex
 # Find all directories prefixed with "lab-"
 find . -maxdepth 1 -type d -name "lab-*" | while read lab_dir; do
   echo "Processing directory: $lab_dir"
-  markdown_file="$lab_dir/LAB-REPORT.md"
-  pdf_file="$lab_dir/LAB-REPORT.pdf"
+  markdown_file="$lab_dir/README.md"
+  pdf_file="$lab_dir/README.pdf"

   # Check if LAB-REPORT.md exists
   if [ -f "$markdown_file" ]; then
     echo "Found $markdown_file"
     # Check if LAB-REPORT.pdf does not exist
     if [ ! -f "$pdf_file" ]; then
-      echo "LAB-REPORT.pdf not found. Generating PDF from markdown..."
+      echo "README.pdf not found. Generating PDF from markdown..."
       # Generate PDF using pandoc
       # Make sure 'pandoc' command is available in the image, which it is for pandoc/latex
       image_dir="$lab_dir"
-      pandoc "$markdown_file" -s -o "$pdf_file" --pdf-engine=pdflatex --resource-path "$image_dir"
+      pandoc "$markdown_file" -s -o "$pdf_file" --pdf-engine=xelatex --resource-path "$image_dir" -V geometry:margin=0.5in

       if [ $? -eq 0 ]; then
         echo "Successfully generated $pdf_file"
@@ -29,10 +29,10 @@ find . -maxdepth 1 -type d -name "lab-*" | while read lab_dir; do
         echo "Error generating $pdf_file"
       fi
     else
-      echo "LAB-REPORT.pdf already exists. Skipping generation."
+      echo "README.pdf already exists. Skipping generation."
     fi
   else
-    echo "LAB-REPORT.md not found in $lab_dir. Skipping."
+    echo "README.md not found in $lab_dir. Skipping."
   fi
 done

@@ -0,0 +1,11 @@
#!/bin/bash
if [ -z "$BW_AWS_ACCOUNT_SECRET_ID" ]; then
    echo "env var BW_AWS_ACCOUNT_SECRET_ID must be set!"
    exit 1
fi
# Pull long-lived keys out of Bitwarden and export them for later aws calls
AWS_SECRETS=$(bw get item "$BW_AWS_ACCOUNT_SECRET_ID")
export AWS_ACCESS_KEY_ID=$(echo "$AWS_SECRETS" | jq -r '.fields[0].value')
export AWS_SECRET_ACCESS_KEY=$(echo "$AWS_SECRETS" | jq -r '.fields[1].value')

utilities/setup_aws/use-s3.sh (new executable file)

@@ -0,0 +1,36 @@
#!/bin/bash
MFA_TOKEN=$1
# Capture everything from the second argument onward as a command
shift
COMMAND=("$@")

if [ -z "$MFA_TOKEN" ]; then
    echo "Error: Run with MFA token!"
    exit 1
fi
if [ -z "$BW_AWS_ACCOUNT_SECRET_ID" ]; then
    echo "env var BW_AWS_ACCOUNT_SECRET_ID must be set!"
    exit 1
fi

# Pull long-lived keys out of Bitwarden
AWS_SECRETS=$(bw get item "$BW_AWS_ACCOUNT_SECRET_ID")
export AWS_ACCESS_KEY_ID=$(echo "$AWS_SECRETS" | jq -r '.fields[0].value')
export AWS_SECRET_ACCESS_KEY=$(echo "$AWS_SECRETS" | jq -r '.fields[1].value')

# Trade them for MFA-gated temporary credentials
SESSION_OUTPUT=$(aws sts assume-role --role-arn "$S3_ROLE" --role-session-name "$SESSION_TYPE" --serial-number "$MFA_IDENTIFIER" --token-code "$MFA_TOKEN")
export AWS_SESSION_TOKEN=$(echo "$SESSION_OUTPUT" | jq -r '.Credentials.SessionToken')
export AWS_ACCESS_KEY_ID=$(echo "$SESSION_OUTPUT" | jq -r '.Credentials.AccessKeyId')
export AWS_SECRET_ACCESS_KEY=$(echo "$SESSION_OUTPUT" | jq -r '.Credentials.SecretAccessKey')

# Run the supplied command if there is one, else fall back to a default listing
if [ "${#COMMAND[@]}" -gt 0 ] && command -v "${COMMAND[0]}" >/dev/null 2>&1; then
    "${COMMAND[@]}"
else
    aws s3 ls s3://witch-lab-3
fi