Though the title of this section in the book is Using Azure Storage, you probably could have guessed that we'll be using the AWS equivalent, S3! As the book says, you can easily use an S3 bucket instead of Azure storage for file storage. You won't need to create a separate storage account like you do for Azure Storage, but you will need to create an S3 bucket. Your S3 bucket will need a globally unique name; I recommend something like "bootstrapping-microservices-<random string>". There are two ways to create it: through the console or through the CLI.
To create a bucket with the console, just navigate to S3 and click "Create bucket". Setting permissions can be tricky, but luckily we can use all of the default settings since we'll only be accessing this file as our IAM user. Then, in the bucket, click the upload button to upload the file through the console. I recommend renaming the file to sample_video.mp4 after your upload.
To create the bucket with the CLI instead, just run the command
aws s3 mb s3://<bucket-name> --region us-east-2
Don't forget to update the region to whichever region you've been using.
And to upload the video just navigate to example-1/videos in the book's companion git repo for this chapter and run the command
aws s3 cp SampleVideo_1280x720_1mb.mp4 s3://<bucket-name>/sample_video.mp4
Notice that when I uploaded the file to the bucket I added /sample_video.mp4 at the end of the S3 path. This renames the file from SampleVideo_1280x720_1mb.mp4 to sample_video.mp4, which will make our lives a little easier later on.
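If you want to double-check that the upload and rename worked, you can list the bucket's contents (an optional sanity check, not a step from the book)
aws s3 ls s3://<bucket-name>
You should see a single sample_video.mp4 entry.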
So far it's been easy, but here comes the hard part. The book even says that it won't be a simple task to convert the code from using Azure storage to AWS. But I've got you covered!
As mentioned above, the book comes with a companion GitHub repo. I've forked the repo and updated it with all the code you'll need for AWS. You can check out my GitHub repo and stop here if you want to just download the code updated for use with AWS and keep following along with the book. Keep reading if you want some explanations or if you want to update the code from the original repo yourself!
First, I created a separate directory called aws-storage. This will keep our AWS code separate from our Azure code. The code is pretty similar to azure-storage, so you can just copy the azure-storage folder, rename it aws-storage, and start from there.
We'll need to do some housekeeping before we jump into the code. The book has us export our environment variables in the terminal before running our microservices, which is safer than keeping them in a .env file. Just as the book says, we'll need to export the port number
export PORT=3000
Unlike the book, we won't need to export a storage account name or access key. Instead, we'll export our AWS credentials as well as the name of the bucket we just created by running the commands below in the terminal
export AWS_ACCESS_KEY_ID=<YOUR_ACCESS_KEY>
export AWS_SECRET_ACCESS_KEY=<YOUR_SECRET_ACCESS_KEY>
export AWS_BUCKET=<YOUR_BUCKET_NAME>
If you don't have an AWS access key, check out the start of my first blog entry where I explain how to generate IAM access keys. Your AWS_BUCKET name should be the name of the bucket you just created.
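By the way, if you want to sanity-check that the credentials you just exported actually work before touching any code, the AWS CLI reads those same environment variables, so this quick call should echo back your IAM user's identity (optional, and not a step from the book)
aws sts get-caller-identity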
Next we'll need to add the AWS SDK to our code. First run
npm install @aws-sdk/client-s3
to install the S3 client npm package. Then, in src/index.js of aws-storage, delete the @azure/storage-blob import since we won't need it anymore. You can also remove it from package.json. Now import the npm module @aws-sdk/client-s3 in its place with const aws = require("@aws-sdk/client-s3")
This is the only new module you'll need to install. Looking up info for the AWS SDK for JS can be confusing as version 2 and version 3 are drastically different and it can be hard to tell the difference between them from a Google search. I'm using the newer version 3 which is modularized by service.
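As a small aside, version 3 also lets you destructure just the pieces we'll use instead of importing the whole module under one name; either style works, and the rest of this post sticks with the aws. prefix from the require above
// Optional alternative import style for the v3 modular SDK
const { S3Client, GetObjectCommand } = require("@aws-sdk/client-s3");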
The Azure example has some error messages to make sure we have the proper environment variables. Make sure to update those messages to reflect the AWS-specific environment variables we set above (a rough sketch of these checks follows the next code block). Then update the code that extracts the environment variables into global variables. The only one you'll have in common with the Azure example is the port number
const port = process.env.PORT
const accessKeyId = process.env.AWS_ACCESS_KEY_ID
const secretAccessKey = process.env.AWS_SECRET_ACCESS_KEY
const bucketName = process.env.AWS_BUCKET
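And here's roughly what the guard clauses mentioned above could look like — a sketch that mirrors the pattern of the book's Azure example, with the exact error wording left up to you
// Fail fast if any required environment variable is missing
if (!process.env.PORT) {
    throw new Error("Please specify the port number for the HTTP server with the environment variable PORT.");
}
if (!process.env.AWS_ACCESS_KEY_ID) {
    throw new Error("Please specify the access key ID of your IAM user with the environment variable AWS_ACCESS_KEY_ID.");
}
if (!process.env.AWS_SECRET_ACCESS_KEY) {
    throw new Error("Please specify the secret access key of your IAM user with the environment variable AWS_SECRET_ACCESS_KEY.");
}
if (!process.env.AWS_BUCKET) {
    throw new Error("Please specify the name of your S3 bucket with the environment variable AWS_BUCKET.");
}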
Next, I initialized the S3 client and connected it to my AWS account with
const s3Client = new aws.S3Client({
    region: 'us-east-2',
    // SDK v3 expects explicit credentials nested under a credentials key
    credentials: { accessKeyId, secretAccessKey }
});
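A quick side note, and an assumption about your setup rather than anything from the book: because we exported AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, the v3 SDK's default credential provider chain can also pick them up on its own, so this shorter form works too
// The SDK falls back to AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the
// environment when no credentials are passed explicitly
const s3Client = new aws.S3Client({ region: 'us-east-2' });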
As always, make sure your region matches the region you created your bucket in. We now have a connection to our AWS account. Next we need to add some details about the specific bucket and file name we want to access. We can parameterize this data, as detailed in the SDK documentation, by creating a function that takes in the file name like so
function getObjectParams(Key) {
    return {
        Bucket: bucketName,
        Key
    };
}
The Key is the file name that I gave the video when I uploaded it to my bucket. Now the SDK has the proper credentials to access the bucket, knows which globally unique bucket to look in, and knows the file name to fetch. All we need to do now is actually fetch the file.
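For example, calling it with the file we uploaded earlier builds exactly the params object that GetObjectCommand expects (the bucket name shown here is just a placeholder)
const params = getObjectParams("sample_video.mp4");
// params is { Bucket: "<your-bucket-name>", Key: "sample_video.mp4" }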
Here is a simple function to fetch the file from s3
async function fetchFileFromS3(fileName) {
    try {
        const { Body, ContentLength } = await s3Client.send(
            new aws.GetObjectCommand(getObjectParams(fileName))
        );
        return { Body, ContentLength };
    } catch (e) {
        console.error('error fetching file from s3');
        console.error(e);
    }
}
This uses our S3 client, initialized with our AWS credentials, and our object params to fetch both the file and its content length, which we'll need later. Notice how it takes in a fileName and passes it to the getObjectParams function we created earlier. Now all that's left is updating the Express route to use this function to pipe the file to the client
app.get("/video", async (req, res) => {
const videoPath = req.query.path;
console.log(`Streaming video from path ${videoPath}.`);
const videoStream = await fetchFileFromS3(videoPath);
res.writeHead(200, {
"Content-Type": "video/mp4",
"Content-Length": videoStream.ContentLength
})
videoStream.Body.pipe(res);
});
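One optional tweak of my own, since fetchFileFromS3 only logs errors and returns nothing when the fetch fails (say, a typo'd file name): you could guard the route before writing headers, something like
app.get("/video", async (req, res) => {
    const videoPath = req.query.path;
    const videoStream = await fetchFileFromS3(videoPath);
    if (!videoStream) {
        // fetchFileFromS3 already logged the error; just tell the client it failed
        res.sendStatus(500);
        return;
    }
    res.writeHead(200, {
        "Content-Type": "video/mp4",
        "Content-Length": videoStream.ContentLength
    });
    videoStream.Body.pipe(res);
});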
Voila! We can now enjoy our video served from our S3 bucket when we run this service locally. The route takes the videoPath from the URL's query string and asks our S3 bucket for a file with that name. Make sure to install your npm packages by navigating to the root of aws-storage and then running
npm install
Then give it a whirl by running
npm start
If you navigate to http://localhost:3000/video?path=sample_video.mp4 you should see your video being served from your S3 bucket!
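If you'd rather check from the terminal, a plain GET with curl should come back with a 200 status and a download size matching the video (assuming the service is still running on port 3000)
curl -s -o /dev/null -w "%{http_code} %{size_download}\n" "http://localhost:3000/video?path=sample_video.mp4"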
Now with all the code updated, you just need to update your docker-compose.yml file to use our new container. Change the yaml for your azure-storage container to the following
aws-storage:
  image: aws-storage
  build:
    context: ./aws-storage
    dockerfile: Dockerfile
  container_name: video-storage
  ports:
    - "4000:80"
  environment:
    - PORT=80
    - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
    - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
    - AWS_BUCKET=${AWS_BUCKET}
  restart: "no"
This will direct Docker to use your new aws-storage container and set your environment variables for your AWS credentials to their values from the host machine.
There's just one last thing we'll need to change before we can start this Docker container. Remember how we renamed the file from SampleVideo_1280x720_1mb.mp4 to sample_video.mp4? Well, the file name is hardcoded in the video-streaming microservice. Navigate to line 38 in video-streaming/src/index.js and change the path to sample_video.mp4. Sure, we could have kept the original name to stick with the book, but this little change helps us understand how to rename files and how these microservices work together a little better.
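Depending on the exact version of the example code you're following, that line may look slightly different, but the change is just swapping the file name inside the path that video-streaming requests from the video-storage service, roughly
// Before (roughly)
path: `/video?path=SampleVideo_1280x720_1mb.mp4`,
// After
path: `/video?path=sample_video.mp4`,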
Now start the application from the root of example-2 with
docker compose up --build
And give it a test by navigating to
http://localhost:4001/video?path=sample_video.mp4
Congrats! Now your containers should be working together to serve this video from your S3 bucket.
The rest of this chapter uses example-3 from the GitHub repo as you add a Mongo database to your microservice architecture. Make sure to use sample_video.mp4 as your file name for that section! If you don't download my forked repo, you'll need to copy over your aws-storage code to the new example. Adding the database doesn't add any extra steps in AWS.
Hope this was helpful! Chapter 5 covers messaging and introduces RabbitMQ outside of a cloud environment, so my next entry will be about chapter 6, where we deploy our microservices to an EKS Kubernetes cluster.