# Lab 3

## Prep

- [x] Gitea set up
  - [x] MFA set up
  - [x] Add `.gitignore`
- [x] Secrets/Token Management
  - [x] Consider secret-scanning
    - [x] Added gitleaks on a pre-commit hook
- [x] Create & Connect to a Git repository
  - [x] https://code.wizards.cafe
- [x] Modify and make a second commit

![image of terminal](./assets/prep-console.png)

- [x] Test whether Gitea Actions works
- [x] Have an existing S3 bucket

## Resources

- [x] [Capital One Data Breach](./assets/Capital%20One%20Data%20Breach%20—%202019.%20Introduction%20_%20by%20Tanner%20Jones%20_%20Nerd%20For%20Tech%20_%20Medium.pdf)
- [x] [Grant IAM User Access to Only One S3 Bucket](./assets/Grant%20IAM%20User%20Access%20to%20Only%20One%20S3%20Bucket%20_%20Medium.pdf)
- [ ] [IAM Bucket Policies](./assets/From%20IAM%20to%20Bucket%20Policies_%20A%20Comprehensive%20Guide%20to%20S3%20Access%20Control%20with%20Console,%20CLI,%20and%20Terraform%20_%20by%20Mohasina%20Clt%20_%20Medium.pdf)
- [ ] [Dumping S3 Buckets!](https://www.youtube.com/watch?v=ITSZ8743MUk)

## Lab

- [x] Create a custom IAM Policy
- [x] Create an IAM Role for EC2

![trust relationships](./assets/trust-relationships.jpg)
![permissions](./assets/permissions.jpg)

- [x] Attach the Role to your EC2 Instance
- [x] Verify S3 access from the EC2 Instance
  * HTTPS outbound was not set up
    * I did not check the outbound rules (even though the lab explicitly called this out) because it mentioned lab 2, so I assumed they had already been set up (they had not). When the connection to S3 failed, I double-checked the lab 3 instructions.

![screenshot of listing s3 contents](./assets/s3-access-screenshot.jpg)

### Stretch

- [x] Create a bucket policy that blocks all public access but allows your IAM role
  - [x] Implemented: [guide](https://aws.amazon.com/blogs/security/how-to-restrict-amazon-s3-bucket-access-to-a-specific-iam-role/)

![restrict to role](./assets/restrict-to-role.jpg)

- [x] **Experiment** with requiring MFA or VPC conditions.
  - [x] MFA conditions
    * MFA did not work out of the box after setting it in the S3 bucket policy. The ways I found you can configure MFA:
      * [stackoverflow](https://stackoverflow.com/questions/34795780/how-to-use-mfa-with-aws-cli)
      * [official guide](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use-resources.html)
    * [x] Via CLI roles - I set up a new set of role-trust relationships (sketched below):
      * Update the S3 role:
        * Update action: `sts:AssumeRole`
        * Update principal (for a user -- could not target a group)
        * Add condition (MFA bool must be true)
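A minimal sketch of what that trust-policy update could look like from the CLI; the account ID, user name, and role name here are placeholders rather than my real values:

```bash
# Hedged sketch -- hypothetical ARNs/names, illustrative only.
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:user/some-user" },
      "Action": "sts:AssumeRole",
      "Condition": { "Bool": { "aws:MultiFactorAuthPresent": "true" } }
    }
  ]
}
EOF

# Replace the role's existing trust policy with the MFA-gated one.
aws iam update-assume-role-policy --role-name s3-role --policy-document file://trust-policy.json
```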
* Commands referenced: I set up a script that looks like this (note it assumes `S3_ROLE`, `SESSION_TYPE`, and `MFA_IDENTIFIER` are already exported):

```bash
#!/usr/bin/env bash
# Pull long-lived AWS keys from Bitwarden, then exchange them (plus an MFA
# token) for temporary credentials on the MFA-gated S3 role.
MFA_TOKEN=$1
if [ -z "$1" ]; then
  echo "Error: Run with MFA token!"
  exit 1
fi
if [ -z "$BW_AWS_ACCOUNT_SECRET_ID" ]; then
  echo "env var BW_AWS_ACCOUNT_SECRET_ID must be set!"
  exit 1
fi

# The long-lived access keys live in a Bitwarden item.
AWS_SECRETS=$(bw get item "$BW_AWS_ACCOUNT_SECRET_ID")
export AWS_ACCESS_KEY_ID=$(echo "$AWS_SECRETS" | jq -r '.fields[0].value')
export AWS_SECRET_ACCESS_KEY=$(echo "$AWS_SECRETS" | jq -r '.fields[1].value')

# Assume the role; the MFA serial + token satisfy the trust policy's condition.
SESSION_OUTPUT=$(aws sts assume-role --role-arn "$S3_ROLE" --role-session-name "$SESSION_TYPE" --serial-number "$MFA_IDENTIFIER" --token-code "$MFA_TOKEN")
# echo "$SESSION_OUTPUT"  # debug

# Swap the long-lived keys for the temporary session credentials.
export AWS_SESSION_TOKEN=$(echo "$SESSION_OUTPUT" | jq -r '.Credentials.SessionToken')
export AWS_ACCESS_KEY_ID=$(echo "$SESSION_OUTPUT" | jq -r '.Credentials.AccessKeyId')
export AWS_SECRET_ACCESS_KEY=$(echo "$SESSION_OUTPUT" | jq -r '.Credentials.SecretAccessKey')

aws s3 ls s3://witch-lab-3
```

* Configuration via `~/.aws/credentials`
* 1Password CLI with AWS Plugin
  * I use Bitwarden, which also has an AWS plugin
  * I've seen a lot more recommendations (TBH it's more like 2 vs 0) for 1Password for credential setup. Wonder why?

- [x] **Host a static site**
  - [x] Enable static website hosting (`index.html`)
  - [x] Configure a Route 53 alias or CNAME for `resume.` to the bucket endpoint.
  - [x] Deploy CloudFront with an ACM certificate for HTTPS
    * See: [resume](https://resume.wizards.cafe)
    * Cloudflare Edge Certificate -> CloudFront -> S3 Bucket
    * In this step, I disabled "static website hosting" on the S3 bucket
    * This was actually maddening to set up. For reasons I can't understand even after Googling and asking ChatGPT, my S3 bucket is in us-east-2, but CloudFront kept redirecting me to us-east-1. I don't like switching regions in AWS, because it makes it easy to forget which region you created a given service in -- resources are hidden unless their region is the active one.

**Private "Invite-Only" Resume Hosting**

- [x] **Pre-signed URLs** `aws s3 presign s3:///resume.pdf --expires-in 3600` (see: presigned url screenshot)

![presigned url](./assets/create-presigned-url.jpg)

### Further Exploration

- [ ] Snapshots & AMIs
  - [ ] Create an EBS snapshot of `/dev/xvda`
  - [ ] Register/create an AMI from that snapshot
  - [x] How do you "version" a server with snapshots? Why is this useful?

**Cattle, not pets**

This is useful for following the concept of treating your servers as "cattle, not pets". Keeping versioned snapshots of your machines means there's nothing special about your currently running server. If it goes down (or you need to shoot it down), you can restore it on another machine from an older snapshot. And if you suddenly need to scale from one machine to many, where each machine needs exactly the same configuration as the others (fail2ban installed, and so on), an AMI image gets you there. A sketch of the commands involved follows.
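Since the snapshot/AMI tasks above are still unchecked, this is only a hedged sketch of what that versioning flow might look like with the AWS CLI; every ID and name below is a placeholder:

```bash
# 1. Snapshot the root EBS volume backing /dev/xvda (placeholder volume ID).
aws ec2 create-snapshot \
  --volume-id vol-0123456789abcdef0 \
  --description "web-server v1.2: fail2ban + web server configured"

# 2. Register an AMI from that snapshot so new instances boot from this exact state.
aws ec2 register-image \
  --name "web-server-v1.2" \
  --architecture x86_64 \
  --virtualization-type hvm \
  --root-device-name /dev/xvda \
  --block-device-mappings '[{"DeviceName":"/dev/xvda","Ebs":{"SnapshotId":"snap-0123456789abcdef0"}}]'

# 3. Launch any number of identical "cattle" from the AMI.
aws ec2 run-instances --image-id ami-0123456789abcdef0 --count 3 --instance-type t3.micro
```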
- [ ] Launch a new instance from your AMI
- [ ] Linux & Security Tooling
  - [ ] `ss -tulpn`, `lsof`, `auditctl` to inspect services and audit
  - [ ] Install & run:
    - [ ] `nmap localhost`
    - [ ] `tcpdump -c 20 -ni eth0`
    - [ ] `lynis audit system`
    - [ ] `fail2ban-client status`
    - [ ] OSSEC/Wazuh or ClamAV
- [ ] Scripting & Automation
  - [ ] Bash: report world-writable files
  - [ ] Python with boto3: list snapshots, start/stop instances
- [ ] Convert to Terraform
  - [ ] IAM Role
  - [ ] IAM Policy
  - [ ] IAM Group
  - [ ] EC2 Instance
  - [ ] S3 Bucket

## Further Reading

- [ ]
- [ ]
- [ ]

## Reflection

* What I built
  * A secured S3 bucket that can only be accessed via multi-factor authentication. Good for storing particularly sensitive information.
  * A minimal HTML website served from an S3 bucket
* Challenges
  * The stretch goal of setting up S3 + MFA was a bit of a pain:
    * Groups cannot be used as the principal in a trust relationship, which breaks my mental model of the ideal way to onboard/offboard engineers by simply adding them to or removing them from groups. I ended up having to assign a user as the principal of the trust relationship for my S3 role (although I may have set up the IAM permissions in an inefficient way).
  * Issues setting up Cloudflare -> CloudFront -> S3 bucket
    * I think adding an extra service (Cloudflare, where I host my domain) added a bit of complexity, though my main issue was figuring out how to wire up the ACM cert -> CloudFront distribution -> S3. I was able to parse most of the instructions with ChatGPT -- I had a much better time reading those than the official AWS docs, which led me through nested links (understandably, because there seem to be multiple ways of doing everything).
* Security concerns
  * On scale and security at scale

## Terms

### Identity and Access Management

```mermaid
graph LR
    IAMPolicy -- attaches to --> IAMIdentity
    ExplainIAMIdentity[users, groups of users, roles, AWS resources]:::aside
    ExplainIAMIdentity -.-> IAMIdentity
    classDef aside stroke-dasharray: 5 5, stroke-width:2px;
```

![Identity Access Management](./assets/mermaid.jpg)
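To make the diagram concrete, a brief hedged sketch of one policy attaching to each kind of IAM identity; the policy name, document file, and identity names are made up for illustration:

```bash
# Create a standalone policy (hypothetical name and policy document).
POLICY_ARN=$(aws iam create-policy \
  --policy-name lab3-s3-read \
  --policy-document file://lab3-s3-read.json \
  --query 'Policy.Arn' --output text)

# The same policy document can attach to a user, a group of users, or a role.
aws iam attach-user-policy  --user-name  some-user --policy-arn "$POLICY_ARN"
aws iam attach-group-policy --group-name engineers --policy-arn "$POLICY_ARN"
aws iam attach-role-policy  --role-name  s3-role   --policy-arn "$POLICY_ARN"
```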