# Lab 3
## Prep

- [x] Gitea set up
- [x] MFA set up
- [x] Add git ignore
- [x] Secrets/Token Management
  - [x] Consider secret-scanning
  - [x] Added gitleaks on a pre-commit hook
- [x] Create & Connect to a Git repository
  - [x] https://code.wizards.cafe
- [x] Modify and make a second commit
- [x] Test to see if gitea actions works
- [x] Have an existing s3 bucket

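The gitleaks pre-commit hook from the prep list can be sketched roughly like this (a minimal sketch, assuming gitleaks v8, whose `protect --staged` subcommand scans only staged changes; the hook path is the standard git one):

```shell
# Install a minimal pre-commit hook that blocks commits containing leaked secrets.
# Assumes the gitleaks binary is on PATH (v8 syntax: `gitleaks protect --staged`).
mkdir -p .git/hooks
cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
# Scan only the staged diff; a nonzero exit aborts the commit
exec gitleaks protect --staged -v
EOF
chmod +x .git/hooks/pre-commit
```

A shared-hooks directory (`git config core.hooksPath`) would let the hook travel with the repo instead of living in `.git/`.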
## Resources

- [x] [Capital One Data Breach](./assets/Capital%20One%20Data%20Breach%20—%202019.%20Introduction%20_%20by%20Tanner%20Jones%20_%20Nerd%20For%20Tech%20_%20Medium.pdf)
- [x] [Grant IAM User Access to Only One S3 Bucket](./assets/Grant%20IAM%20User%20Access%20to%20Only%20One%20S3%20Bucket%20_%20Medium.pdf)
- [ ] [IAM Bucket Policies](./assets/From%20IAM%20to%20Bucket%20Policies_%20A%20Comprehensive%20Guide%20to%20S3%20Access%20Control%20with%20Console,%20CLI,%20and%20Terraform%20_%20by%20Mohasina%20Clt%20_%20Medium.pdf)
- [ ] [Dumping S3 Buckets!](https://www.youtube.com/watch?v=ITSZ8743MUk)

## Lab

### Tasks

- [x] **4.1. Create a Custom IAM Policy**
- [x] **4.2. Create an IAM Role for EC2**
- [x] **4.3. Attach the Role to your EC2 Instance**
- [x] **4.4. Verify S3 access from the EC2 Instance**
  * HTTPS outbound was not set up.
  * I did not check the outbound rules (even though the lab explicitly called this out), because the lab mentioned Lab 2 and my assumption was that they had already been set up (they had not). I had to go back and add the missing outbound rule.

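As a sketch of what the custom policy in 4.1 might look like (the exact actions are my assumption; `witch-lab-3` is the bucket used later in this lab), written as a shell heredoc so it could be fed to `aws iam create-policy --policy-document file://s3-policy.json`:

```shell
# Hypothetical least-privilege policy scoped to a single bucket.
# ListBucket applies to the bucket ARN; object actions apply to objects under it.
cat > s3-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::witch-lab-3"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::witch-lab-3/*"
    }
  ]
}
EOF
# Quick syntax sanity-check before uploading
jq -e '.Statement | length' s3-policy.json   # prints 2
```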
### Stretch Goals

- [x] **Deep Dive into Bucket Policies**
  - [x] **1. Create a bucket policy** that blocks all public access but allows your IAM role
    - Implemented using this [guide](https://aws.amazon.com/blogs/security/how-to-restrict-amazon-s3-bucket-access-to-a-specific-iam-role/)

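A rough sketch of such a policy (the linked guide does this with the role's unique ID and `aws:userId`; the `aws:PrincipalArn` variant below is a simpler alternative shown for illustration, and all ARNs are placeholders):

```shell
# Deny all S3 actions on the bucket to every principal except the named role.
# <BUCKET>, <ACCOUNT_ID>, and <ROLE_NAME> are placeholders.
cat > bucket-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAllExceptRole",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::<BUCKET>",
        "arn:aws:s3:::<BUCKET>/*"
      ],
      "Condition": {
        "StringNotEquals": { "aws:PrincipalArn": "arn:aws:iam::<ACCOUNT_ID>:role/<ROLE_NAME>" }
      }
    }
  ]
}
EOF
jq -e '.Statement[0].Effect == "Deny"' bucket-policy.json   # prints true
```

An explicit `Deny` like this overrides any `Allow` elsewhere, which is why locking yourself out is easy if the exception condition is wrong.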
- [x] **2. Experiment** with requiring MFA or VPC conditions.
  - [x] MFA conditions
    * MFA did not work out of the box after setting it in the S3 bucket policy. The ways I found you can configure MFA:
      * [Stack Overflow](https://stackoverflow.com/questions/34795780/how-to-use-mfa-with-aws-cli)
      * [official guide](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use-resources.html)
    * [x] Via CLI roles: I set up a new set of role trust relationships.
      * Update the S3 role:
        * Update the action: `sts:AssumeRole`
        * Update the principal (since you cannot target groups, I tried setting it first to my user, and then to the root account for scalability)
        * Add a condition (the MFA bool must be true)
      * Commands referenced: I set up a script that looks like this:

```bash
#!/usr/bin/env bash
# Source this script so the exported credentials persist in your shell.
# Assumes S3_ROLE, SESSION_TYPE, and MFA_IDENTIFIER are already exported.
MFA_TOKEN=$1

if [ -z "$1" ]; then
    echo "Error: Run with MFA token!"
    exit 1
fi

if [ -z "$BW_AWS_ACCOUNT_SECRET_ID" ]; then
    echo "env var BW_AWS_ACCOUNT_SECRET_ID must be set!"
    exit 1
fi

# Pull the long-lived access keys out of Bitwarden
AWS_SECRETS=$(bw get item "$BW_AWS_ACCOUNT_SECRET_ID")

export AWS_ACCESS_KEY_ID=$(echo "$AWS_SECRETS" | jq -r '.fields[0].value')
export AWS_SECRET_ACCESS_KEY=$(echo "$AWS_SECRETS" | jq -r '.fields[1].value')

# Trade the long-lived keys for MFA-gated temporary credentials
SESSION_OUTPUT=$(aws sts assume-role --role-arn "$S3_ROLE" \
    --role-session-name "$SESSION_TYPE" \
    --serial-number "$MFA_IDENTIFIER" --token-code "$MFA_TOKEN")

export AWS_SESSION_TOKEN=$(echo "$SESSION_OUTPUT" | jq -r '.Credentials.SessionToken')
export AWS_ACCESS_KEY_ID=$(echo "$SESSION_OUTPUT" | jq -r '.Credentials.AccessKeyId')
export AWS_SECRET_ACCESS_KEY=$(echo "$SESSION_OUTPUT" | jq -r '.Credentials.SecretAccessKey')

aws s3 ls s3://witch-lab-3
```

    * Configuration via `~/.aws/credentials`
      * 1Password CLI with its AWS plugin
        * I use Bitwarden, which also has an AWS plugin
        * I've seen more recommendations (to be honest, it's more like 2 vs. 0) for 1Password for credential setup. I wonder why?

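The role trust relationship described above (principal set to the account root, `sts:AssumeRole`, MFA condition) can be sketched like this; `<ACCOUNT_ID>` and `<ROLE_NAME>` are placeholders:

```shell
# Trust policy that gates sts:AssumeRole on MFA being present.
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::<ACCOUNT_ID>:root" },
      "Action": "sts:AssumeRole",
      "Condition": { "Bool": { "aws:MultiFactorAuthPresent": "true" } }
    }
  ]
}
EOF
# Attach it with:
#   aws iam update-assume-role-policy --role-name <ROLE_NAME> \
#     --policy-document file://trust-policy.json
jq -e '.Statement[0].Action == "sts:AssumeRole"' trust-policy.json   # prints true
```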
- [x] **3. Host a static site**
  - [x] Enable static website hosting (`index.html`)
  - [x] Configure a Route 53 alias or CNAME for `resume.<yourdomain>` to the bucket endpoint.
  - [x] Deploy CloudFront with an ACM certificate for HTTPS
    * See: [resume](https://resume.wizards.cafe)
  - [x] **b. Pre-signed URLs**
    * (see: presigned URL screenshot)
    * `aws s3 presign s3://<YOUR_BUCKET_NAME>/resume.pdf --expires-in 3600`
  * Cloudflare edge certificate -> CloudFront -> S3 bucket
    * In this step, I disabled "static website hosting" on the S3 bucket
    * This was actually maddening to set up. For reasons I couldn't pin down even after Google searching and ChatGPT-ing, my S3 bucket is in us-east-2 and CloudFront kept redirecting me to us-east-1 (one known quirk: ACM certificates used by CloudFront must be issued in us-east-1, regardless of the bucket's region). I don't like switching regions in AWS, because it makes it easy to forget which region you created a specific service in; resources are hidden depending on which region is active at the moment.

- [x] Import resources into terraform

  Ran some commands:

```sh
terraform import aws_iam_policy.assume_role_s3_policy $TF_VAR_ASSUME_ROLE_POLICY
...
terraform plan
terraform apply
```

### Further Exploration

- [x] Snapshots & AMIs
  - [x] Create an EBS snapshot of `/dev/xvda`
  - [x] Register/create an AMI from that snapshot
    * I think with Terraform we can combine the two steps and use `aws_ami_from_instance` to get the end result (an AMI created from a snapshot) for free. Otherwise I think you would want to use [aws_ebs_snapshot](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ebs_snapshot) and then [aws_ami](https://registry.terraform.io/providers/hashicorp/aws/5.99.1/docs/resources/ami).

```hcl
# Create an AMI (and its backing snapshot) directly from the running instance
resource "aws_ami_from_instance" "ami_snapshot" {
  name                    = "ami-snapshot-${formatdate("YYYY-MM-DD", timestamp())}"
  source_instance_id      = aws_instance.my_first_linux.id
  snapshot_without_reboot = true

  tags = {
    Name = "labs"
  }
}
```

- [x] How do you "version" a server with snapshots? Why is this useful?

  * **Cattle, not pets**

    Versioned snapshots support the concept of treating your servers as "cattle, not pets": there is nothing special about your currently running server. If it goes down (or you need to shoot it down yourself), you can restore it on another machine from an earlier snapshot.

    As another example, suppose you were tasked with installing a whole suite of tools on an EC2 instance (e.g. fail2ban, ClamAV, etc.), and then your boss tells you the company needs the same setup on another 49 instances. Having one AMI that contains all of those tools saves you from re-running the same commands 50 times.

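One simple versioning scheme is to stamp each image name with a date, mirroring the `formatdate(...)` used in the Terraform above; a sketch (the `web-base` prefix is a made-up example):

```shell
# Date-stamped AMI names give you an ordered history of server versions.
AMI_NAME="web-base-$(date -u +%Y-%m-%d)"
echo "$AMI_NAME"
# On a machine with AWS credentials, each run would then bake a new version:
#   aws ec2 create-image --instance-id <INSTANCE_ID> --name "$AMI_NAME" --no-reboot
```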
- [x] Launch a new instance from your AMI

  Terraform is great. Terraform is life. This was really simple to do.

```hcl
# Launch a new instance from the AMI created above.
# Note: `ami` takes an AMI ID, so reference `.id` rather than `.arn`.
resource "aws_instance" "my_second_linux" {
  instance_type   = "t2.micro"
  ami             = aws_ami_from_instance.ami_snapshot.id
  security_groups = ["ssh-access-witch"]

  tags = {
    Name = "labs"
  }
}
```

- [x] Convert to terraform
  * Terraform files can be found [here](./terraform/main.tf)

## Reflection

* What I built
  * A secured S3 bucket whose contents can only be accessed via multi-factor authentication. Good for storing particularly sensitive information.
  * A minimal HTML website served from an S3 bucket
* Challenges
  * The stretch goal for setting up S3 + MFA was a bit of a pain:
    * Groups cannot be used as the principal in a trust relationship, so to get things working I added the trust relationship to my user's ARN instead. I prodded ChatGPT for a more practical way to do this (the per-user approach wouldn't scale with hundreds of users, onboarding/offboarding, etc.) and had to go back and fix how the policies worked.
  * Issues setting up Cloudflare -> CloudFront -> S3 bucket
    * I think adding an extra service (Cloudflare, where I host my domain) added a little complexity, though my main issue was figuring out how to set up the ACM cert -> CloudFront distribution -> S3 chain. Most of the instructions I was able to parse through with ChatGPT; I have to say I had a much better time reading those instructions than the official AWS docs, which led me through nested links (understandably, because there seem to be multiple ways of doing everything).

* Security concerns

  Scale, and security at scale.

  I started out this lab doing "click-ops", and I noticed while testing connections that there was a lot of trial and error in setting up permissions.

  My process seemed to be: OK, this seems pretty straightforward, let's just add the policy. But after adding the policy there was a cascade of errors, because I had forgotten additional permissions or trust relationships that weren't obvious until I actually went through the error logs one by one.

  Once everything was set up via click-ops and imported into Terraform, though, repeating the same steps via Terraform was *very easy*. I think putting everything down in code really helps to self-document the steps it takes to reach a fully functioning setup.

## Terms
### Identity and Access Management

```mermaid
graph LR
    IAMPolicy -- attaches to --> IAMIdentity
    ExplainIAMIdentity[users, groups of users, roles, AWS resources]:::aside
    ExplainIAMIdentity -.-> IAMIdentity
    classDef aside stroke-dasharray: 5 5, stroke-width:2px;
```