These AWS projects take you from beginner to expert, giving you hands-on experience with core cloud services. You will learn how to host websites, automate workflows, process data in real time, deploy serverless applications, and manage infrastructure, all while gaining practical skills with real AWS tools.
What is AWS?
Amazon Web Services (AWS) is one of the most widely used cloud platforms. AWS delivers cloud services to developers and companies, allowing them to remain agile, and it is used by a wide range of organizations, from start-ups to multibillion-dollar enterprises and government institutions. If you want to work in cloud computing, AWS is worth learning, as it offers a wide range of services to its customers.
Learn AWS Cloud fundamentals and gain hands-on experience in deploying, managing, and scaling applications on AWS.
Beginner AWS Projects
1. Serverless Personal Portfolio Website
Project details:
Host a portfolio site built with HTML, CSS, and JavaScript in a serverless way: there are no servers to manage, and you pay only for what you use. The site is hosted on S3, content is served quickly through CloudFront, an SSL certificate lets the site run over HTTPS, and a custom domain can be connected through Route 53.
Steps:
- Create a bucket in Amazon S3 and enable static website hosting (see the sketch after these steps)
- Upload website files like index.html, CSS, and JS.
- Allow public read access in the bucket policy so that the site can be opened in a browser.
- Get a free SSL certificate from AWS Certificate Manager
- Create a CloudFront distribution, set the S3 bucket as the origin, and configure the custom domain and certificate
- Create an Origin Access Identity (OAI) and update the bucket policy to block direct S3 access
- Connect the domain to the CloudFront distribution by creating an alias A record in Amazon Route 53
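A minimal sketch of the bucket setup in Python with boto3, assuming a hypothetical bucket name and region (once the OAI is in place, the public-read policy can be replaced by one that only allows CloudFront):

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")
bucket = "my-portfolio-site-example"  # hypothetical bucket name

# Create the bucket (no LocationConstraint is needed in us-east-1)
s3.create_bucket(Bucket=bucket)

# Enable static website hosting
s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)

# Upload a site file with the correct content type so browsers render it
s3.upload_file(
    "index.html", bucket, "index.html",
    ExtraArgs={"ContentType": "text/html"},
)
```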
AWS services used: S3, CloudFront, Certificate Manager, Route 53
2. Automated Contact Form with Email Notification
Project details:
Create a backend for a website contact form in a serverless way. When the form is submitted, a Lambda function is triggered, reads the form data, and sends an email through SES. This gives you an automated email notification system without managing a server.
Steps:
- Verify the sender email address (or the full domain) in Amazon SES
- Write a Lambda function in Python or NodeJS that processes the form data (name, email, message) and sends the mail by calling SES’s SendEmail API (see the sketch after these steps)
- Create a REST API in Amazon API Gateway and add a POST method to a resource like /contact
- Connect the POST method to the Lambda function
- Enable CORS in API Gateway, deploy the API, and note the public invoke URL
- Update the website’s form to send its data as a JSON payload to the API Gateway URL
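A minimal sketch of the Lambda handler in Python (the sender and recipient addresses are hypothetical and must be verified while SES is in sandbox mode; the body handling assumes a Lambda proxy integration):

```python
import json
import boto3

ses = boto3.client("ses")

# Hypothetical addresses -- both must be verified while SES is in sandbox mode
SENDER = "notifications@example.com"
RECIPIENT = "me@example.com"

def lambda_handler(event, context):
    # With a Lambda proxy integration, the form data arrives as a JSON string in event["body"]
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "")
    email = body.get("email", "")
    message = body.get("message", "")

    ses.send_email(
        Source=SENDER,
        Destination={"ToAddresses": [RECIPIENT]},
        Message={
            "Subject": {"Data": f"New contact form message from {name}"},
            "Body": {"Text": {"Data": f"From: {name} <{email}>\n\n{message}"}},
        },
    )

    return {
        "statusCode": 200,
        "headers": {"Access-Control-Allow-Origin": "*"},  # CORS header for the browser
        "body": json.dumps({"status": "sent"}),
    }
```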
AWS services used: Lambda, API Gateway, SES
3. On-the-Fly Image Resizing Service
Project details:
Create a pipeline that automatically resizes images as soon as they are uploaded. Lambda is triggered when an image arrives in the source bucket, creates a thumbnail or other resized versions, and saves them in the processed bucket. This pattern is very useful in applications where users upload profile pictures or product images and different sizes are required.
Steps:
- Create two Amazon S3 buckets — one to upload the original images and one to store the resized images
- Develop a Lambda function using an image-processing library such as sharp for Node.js or Pillow for Python (see the sketch after these steps)
- Keep the logic of the function as follows:
- Read the source bucket and object key from the event
- Download the source image from S3
- Pass the image to the resizing library and create a thumbnail, e.g. 200×200 pixels
- Upload the thumbnail to the processed S3 bucket
- Configure event notification on the source bucket so that the Lambda function is triggered when an object is uploaded
- Give the Lambda execution role IAM permissions to call s3:GetObject on the source bucket and s3:PutObject on the destination bucket
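The steps mention sharp for Node.js; here is an equivalent sketch in Python using Pillow instead (bucket names are assumptions, and Pillow must be packaged with the function or attached as a Lambda layer):

```python
import io
import os
import urllib.parse

import boto3
from PIL import Image  # Pillow, packaged with the function or provided via a Lambda layer

s3 = boto3.client("s3")
DEST_BUCKET = os.environ.get("DEST_BUCKET", "my-processed-images")  # hypothetical bucket name

def lambda_handler(event, context):
    for record in event["Records"]:
        # S3 event notifications URL-encode the object key
        source_bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # Download the original image into memory
        obj = s3.get_object(Bucket=source_bucket, Key=key)
        image = Image.open(io.BytesIO(obj["Body"].read())).convert("RGB")

        # Create a thumbnail no larger than 200x200 (aspect ratio is preserved)
        image.thumbnail((200, 200))
        buffer = io.BytesIO()
        image.save(buffer, format="JPEG")
        buffer.seek(0)

        # Upload the thumbnail to the processed bucket
        s3.put_object(
            Bucket=DEST_BUCKET,
            Key=f"thumbnails/{key}",
            Body=buffer,
            ContentType="image/jpeg",
        )
```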
AWS services used: Amazon S3, AWS Lambda, AWS IAM
Intermediate AWS Projects
4. Full-Stack Serverless To-Do List Application
Project details:
Build a To-Do list application with a React or Vue frontend and a fully serverless backend. Tasks are stored in a DynamoDB table, and user authentication is handled by Cognito. Users can log in and create, update, and delete their personal tasks.
Steps:
- Create an Amazon DynamoDB table with a composite primary key (a userId partition key and a taskId sort key) so that each user’s data is kept separate
- Create an Amazon Cognito user pool to handle registration, login, and JWT token management
- Develop Lambda functions for the API operations, such as createTask, getTasksForUser, updateTask, and deleteTask (a createTask sketch appears after these steps)
- Create a REST API in Amazon API Gateway and configure endpoints such as /tasks and /tasks/{taskId} with POST, GET, PUT, and DELETE methods. Authorize the methods with a Cognito user pool authorizer so that only requests containing a valid JWT token are permitted
- Develop the frontend as a single-page application (SPA), using the AWS Amplify library to handle the Cognito authentication flow and signed API requests
- Host the frontend’s static build output on Amazon S3 and serve it through a CloudFront distribution
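A minimal sketch of the createTask Lambda in Python (the table name is hypothetical; the claims path assumes a REST API with a Cognito user pool authorizer and proxy integration):

```python
import json
import os
import uuid

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(os.environ.get("TABLE_NAME", "Tasks"))  # hypothetical table name

def lambda_handler(event, context):
    # With a Cognito user pool authorizer, the caller's identity is in the request context claims
    claims = event["requestContext"]["authorizer"]["claims"]
    user_id = claims["sub"]

    body = json.loads(event.get("body") or "{}")
    task = {
        "userId": user_id,            # partition key
        "taskId": str(uuid.uuid4()),  # sort key
        "title": body.get("title", ""),
        "done": False,
    }
    table.put_item(Item=task)

    return {"statusCode": 201, "body": json.dumps(task)}
```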
AWS services used: Amazon DynamoDB, AWS Lambda, Amazon API Gateway, Amazon S3, Amazon Cognito
5. CI/CD Pipeline for a Web Application
Project details:
Automate the whole process of taking a web application from a Git repository, building it, testing it, and deploying it to a hosting environment. Manual steps are reduced, and code can automatically reach production on every new commit.
Steps:
- Configure AWS CodePipeline with a Source stage linked to a GitHub, Bitbucket, or AWS CodeCommit repository. The pipeline should be triggered on every commit to a branch, e.g., main.
- Add a Build stage that uses AWS CodeBuild. Keep a buildspec.yml file in the repository that contains the commands to install dependencies, run tests, and create a production build, and that defines which files go into the build artifacts (see the sketch after these steps)
- Add a Deploy stage. The deployment target depends on the architecture
- If it is a static site, then sync the build artifacts to an S3 bucket
- If deploying to EC2 or ECS, then roll out the new version to the servers or containers using a CodeDeploy action
- Configure IAM roles so that CodeBuild and CodeDeploy can access S3, ECS or other resources
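A minimal buildspec.yml sketch for a Node-based single-page app, assuming npm scripts named test and build and a build/ output directory:

```yaml
version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: 18
    commands:
      - npm ci
  build:
    commands:
      - npm test
      - npm run build

artifacts:
  # Everything under build/ becomes the pipeline's output artifact
  base-directory: build
  files:
    - '**/*'
```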
AWS services used: CodePipeline, CodeBuild, CodeDeploy, CodeCommit
6. Deploy a Containerized Microservice with ECS and Fargate
Project details:
Package the application in a Docker container and deploy it on AWS’s serverless container platform. With this approach, you do not have to manage virtual machines; the container workload runs directly on Fargate.
Steps:
- Create a Dockerfile in the root directory of the application that defines the base image, dependency installation, code copying, and the command that starts the application (see the sketch after these steps)
- Create a private repository in Amazon Elastic Container Registry (ECR) to store the Docker image
- Build a Docker image on the local machine, tag it with the ECR repository URI and then push the image
- Define an ECS task definition with the ECR image, CPU and memory requirements, a port mapping (e.g. exposing container port 8080), and a log configuration that sends logs to CloudWatch
- Create an ECS cluster, then define an ECS service with the AWS Fargate launch type so that the desired count (e.g. 2 tasks) runs at all times
- Set up an Application Load Balancer and attach its target group to the ECS service so that traffic is distributed across the containers
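A minimal Dockerfile sketch for a small Python web service (the base image, port, and entry point are assumptions; keep the exposed port consistent with the task definition):

```dockerfile
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and document the port used in the ECS task definition
COPY . .
EXPOSE 8080

CMD ["python", "app.py"]
```

After building the image locally with docker build, tag it with the ECR repository URI using docker tag and push it with docker push, authenticating first with aws ecr get-login-password piped into docker login.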
AWS services used: ECS, Fargate, ECR, Application Load Balancer
Expert AWS Projects
7. Real-time Data Streaming and Analytics Pipeline
Project details:
Create a system where a large amount of data arrives continuously and is processed and stored immediately, for example clickstream data from a website or telemetry data from IoT devices. The pipeline’s job is to ingest the data, process it, and send it to storage so that further analysis can be done.
Steps:
- Create an Amazon Kinesis Data Stream, which will be the entry point for ingesting data. Decide the number of shards according to the expected data throughput
- Create a data producer, i.e. a script or application that sends records to the Kinesis stream using the PutRecord or PutRecords API call of the AWS SDK
- Create the consumer logic: write an AWS Lambda function to which Kinesis will send batches of records (see the sketch after these steps)
- Decode the records (delivered base64-encoded) inside the Lambda function, then apply the required transformation, aggregation, or filtering
- Store the processed data, for example in Amazon DynamoDB for fast lookups or in Amazon S3 for bulk storage
- Connect Kinesis and Lambda with an event source mapping so that Lambda is invoked automatically when new data arrives in the stream
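A minimal sketch of the Lambda consumer in Python (the table name, record fields, and filtering rule are assumptions):

```python
import base64
import json

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("ClickstreamEvents")  # hypothetical table name

def lambda_handler(event, context):
    for record in event["Records"]:
        # Kinesis delivers each payload base64-encoded
        payload = base64.b64decode(record["kinesis"]["data"])
        item = json.loads(payload)

        # Example transformation: keep only page-view events and store them for fast lookup
        if item.get("eventType") == "page_view":
            table.put_item(Item={
                "userId": item["userId"],
                "timestamp": item["timestamp"],
                "page": item.get("page", ""),
            })
```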
AWS services used: Kinesis Data Streams, Lambda, DynamoDB
8. Interactive Querying of a Serverless Data Lake
Project details:
Create a data lake on Amazon S3 and run SQL queries on it using serverless tools, with no separate data warehouse to manage. The data is stored in S3, a catalog is created with AWS Glue, and queries are run from Athena. QuickSight can be used for visualization.
Steps:
- Store the data in Amazon S3, using a columnar format like Apache Parquet for better performance and a logical partitioning structure like sales/year=2025/month=09
- Create an AWS Glue Crawler and run it on the root directory of the dataset. The crawler will detect the schema, identify partitions, and create metadata tables in the Glue Data Catalog
- Open the Amazon Athena console, select the database created by Glue, and run standard SQL queries. Athena queries the data directly on S3 (see the sketch after these steps)
- Athena saves the query results to S3. For visualization, connect Amazon QuickSight to Athena and build dashboards and reports
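A minimal sketch of running an Athena query with boto3 (the database, table, columns, and result bucket are assumptions; the same SQL can be run directly in the Athena console):

```python
import boto3

athena = boto3.client("athena")

# Hypothetical table and columns created by the Glue crawler
QUERY = """
SELECT month, SUM(amount) AS total_sales
FROM sales
WHERE year = '2025'
GROUP BY month
ORDER BY month
"""

response = athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": "sales_db"},          # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://my-athena-results-bucket/"},  # hypothetical bucket
)
print("Query execution id:", response["QueryExecutionId"])
```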
AWS services used: S3, Glue, Athena, QuickSight
9. Deploying a Machine Learning Model on Amazon SageMaker
Project Details:
Create a full end-to-end machine learning pipeline that covers data preparation, model training, and deployment of the trained model behind a real-time API endpoint, so the model can be invoked from any application for real-time predictions.
Steps:
- Upload the training data into an Amazon S3 bucket and spin up an Amazon SageMaker Notebook instance (managed Jupyter environment)
- Perform the model training from the notebook using the SageMaker Python SDK (the sketch after these steps shows the pattern)
- Choose one of SageMaker’s built-in algorithms or import your custom algorithm into a Docker container
- Create a SageMaker Estimator instance specifying the algorithm, training compute instance type, and input-output paths to S3
- Call .fit() on the Estimator to start the training job; SageMaker provisions the training infrastructure, runs the training, and stores the model artifacts in S3
- Call .deploy() after training is complete to host the trained model behind a real-time HTTPS endpoint; the instance type and count can be specified
- Invoke the SageMaker endpoint using the AWS SDK from any application, sending a request payload and receiving a prediction response
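A minimal sketch using the SageMaker Python SDK with the built-in XGBoost algorithm (the role ARN, bucket paths, hyperparameters, and instance types are assumptions):

```python
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # hypothetical role ARN

# Built-in XGBoost container image for the current region
image_uri = sagemaker.image_uris.retrieve("xgboost", session.boto_region_name, version="1.7-1")

estimator = Estimator(
    image_uri=image_uri,
    role=role,
    instance_count=1,
    instance_type="ml.m5.large",
    output_path="s3://my-ml-bucket/output/",   # hypothetical bucket
    sagemaker_session=session,
)
estimator.set_hyperparameters(objective="reg:squarederror", num_round=100)

# Start the training job (the training data path is an assumption)
estimator.fit({"train": TrainingInput("s3://my-ml-bucket/train/", content_type="text/csv")})

# Deploy the trained model to a real-time HTTPS endpoint
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```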
AWS services used: Amazon SageMaker, Amazon S3
10. Build a Multi-Tier Web Application using Infrastructure as Code
Project Details:
Create a complete multi-tier web application environment in the cloud using an Infrastructure as Code approach, which keeps provisioning and management consistent and repeatable. The network, database, application servers, and load balancer are all deployed together from a single CloudFormation template.
Steps:
- Write the CloudFormation template in YAML or JSON (a network-layer skeleton appears after these steps)
- Define the Network layer, which includes VPC, public and private subnets, internet gateway, and route tables. Use multiple Availability Zones for high availability
- Define the Data layer, which includes Amazon RDS instances placed in private subnets. Create a security group for the database that allows traffic only from the web tier security group
- Define the Application layer with an Auto Scaling setup. Provide the EC2 instance configuration (AMI, instance type, user data scripts) in a launch template and deploy the Auto Scaling Group to the public subnets
- Define the Presentation layer, which declares an Elastic Load Balancer and a target group. Attach the target group to the Auto Scaling group and configure a listener on the load balancer that routes internet traffic to the target group
- Deploy the stack from the AWS CLI or console. CloudFormation creates all resources in the correct order based on the template. To change the infrastructure later, modify the template and update the stack
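A skeleton of the network layer in CloudFormation YAML (CIDR ranges and logical names are assumptions; the data, application, and presentation layers would be added as further resources in the same template):

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Multi-tier web application - network layer sketch

Resources:
  AppVPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
      EnableDnsSupport: true
      EnableDnsHostnames: true

  PublicSubnetA:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref AppVPC
      CidrBlock: 10.0.1.0/24
      AvailabilityZone: !Select
        - 0
        - !GetAZs ''
      MapPublicIpOnLaunch: true

  PrivateSubnetA:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref AppVPC
      CidrBlock: 10.0.2.0/24
      AvailabilityZone: !Select
        - 0
        - !GetAZs ''

  InternetGateway:
    Type: AWS::EC2::InternetGateway

  VPCGatewayAttachment:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      VpcId: !Ref AppVPC
      InternetGatewayId: !Ref InternetGateway
```

The stack can then be deployed with aws cloudformation deploy --template-file template.yaml --stack-name webapp-stack.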
AWS services used: AWS CloudFormation, Amazon VPC, Amazon EC2, Amazon RDS, Application Load Balancer