2012-2013
(Marketing)
Periyar University
(Computer Science)
Subharti University
Certified AWS DevOps Professional; targeting a challenging DevOps Engineer role to apply nearly two decades of experience in infrastructure automation, container orchestration, CI/CD pipelines, security protocols, and data compliance. Aiming to boost operational excellence and drive business growth through a blend of creative thinking, effective problem-solving, and strategic planning.
Created Python code for the organisation.
This project was about deleting unused AWS S3 buckets in the company's AWS account.
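A minimal sketch of the idea in Python, assuming boto3 and treating empty buckets as "unused" (the project's actual criteria may have differed; the helper names and dry_run flag are illustrative):

import boto3

s3 = boto3.client("s3")

def is_unused(bucket_name):
    # Illustrative rule: a bucket containing no objects is considered unused
    resp = s3.list_objects_v2(Bucket=bucket_name, MaxKeys=1)
    return resp["KeyCount"] == 0

def delete_unused_buckets(dry_run=True):
    # Iterate over every bucket in the account and delete the empty ones
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        if is_unused(name):
            print(f"Deleting unused bucket: {name}")
            if not dry_run:
                s3.delete_bucket(Bucket=name)

if __name__ == "__main__":
    delete_unused_buckets(dry_run=True)  # start with a dry run for safety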
Kubernetes deployment.
TypeScript code usage.
Pulumi IaC tool usage.
This project was about deploying an AWS ECR image to Kubernetes (K8s) using Pulumi TypeScript.
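A minimal sketch of that idea in Pulumi TypeScript; the app name, replica count, and ECR image URI are illustrative placeholders, not the project's actual values.

import * as k8s from "@pulumi/kubernetes";

const appLabels = { app: "myapp" };  // hypothetical app name

// Deploy a container image stored in AWS ECR to the Kubernetes cluster
const deployment = new k8s.apps.v1.Deployment("myapp", {
    spec: {
        selector: { matchLabels: appLabels },
        replicas: 2,
        template: {
            metadata: { labels: appLabels },
            spec: {
                containers: [{
                    name: "myapp",
                    // placeholder ECR image URI
                    image: "123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest",
                }],
            },
        },
    },
});

export const deploymentName = deployment.metadata.name;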
VPC configuration.
AWS Lambda using Python.
This project consists of sample code to seamlessly connect Amazon ECS and AWS Lambda workloads using Amazon VPC Lattice.
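As a sketch of the Lambda side: the Python handler below calls an ECS service through its VPC Lattice endpoint. This assumes the Lambda runs in a VPC associated with the Lattice service network; the service URL and /health path are made-up placeholders (Lattice generates the real DNS name).

import urllib3

# Hypothetical VPC Lattice service endpoint; the real DNS name is generated by Lattice
SERVICE_URL = "https://my-ecs-service.example.vpc-lattice-svcs.us-east-1.on.aws"

http = urllib3.PoolManager()

def handler(event, context):
    # Reach the ECS workload through the Lattice service network
    resp = http.request("GET", SERVICE_URL + "/health")
    return {"statusCode": resp.status, "body": resp.data.decode("utf-8")}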
Our CI process typically takes 15-20 minutes for most of our services. We achieve this because we use GitLab for our code repository, which comes with the GitLab Runner feature. However, you need a runner for this feature to work: you can either use GitLab's own shared runners or run your own. We run our own runner because we want better performance for our CI process; we configure the runner's resources (CPU and memory) to our requirements, and it ensures that secret credentials remain inside our AWS environment. For us, a self-managed GitLab runner is the best option.
Autoscale requires one master server whose task is to launch a new runner whenever there is a new job to handle. In our autoscale configuration, we chose Docker as our runner executor. Autoscale also makes us more cost-efficient, since it launches a new runner server only when one is needed. We set the runner's idle timeout quite short, around 5 minutes: if there are no more jobs after 5 minutes of being idle, the server automatically shuts down. This configuration has a drawback: it increases the cold-start time when a job arrives and there is no active runner server. However, we have a very small engineering team and we don't mind this for now, since our deployment rate is still small. Since we implemented this architecture 2 months ago, we have made around 700 deployments (~11 deployments/day).
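For illustration, the idle timeout above corresponds to the IdleTime setting in the runner's config.toml when using the autoscaling docker+machine executor. This is a minimal sketch, assuming GitLab Runner's autoscale configuration; the token, names, and driver options are placeholders, not our actual setup.

concurrent = 4

[[runners]]
  name = "autoscale-runner"        # hypothetical runner name
  url = "https://gitlab.com/"
  token = "REGISTRATION_TOKEN"     # placeholder, not a real token
  executor = "docker+machine"      # the autoscaling executor
  [runners.docker]
    image = "docker:24.0"
  [runners.machine]
    IdleCount = 0                  # keep no warm machines (accepts cold starts)
    IdleTime = 300                 # shut a machine down after 5 idle minutes
    MachineDriver = "amazonec2"    # launch runner machines on AWS EC2
    MachineName = "gitlab-runner-%s"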
Moving on, we have a container registry: once the Docker build process is done, it produces an image, which we push to ECR and then deploy to ECS. The setup of ECR and ECS is pretty simple since both are AWS services. On ECS we want to keep things simple, which is why we chose EC2 as our container cluster; it costs more, but less maintenance work is required.
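As a sketch, the push-and-deploy step can be done with standard Docker and AWS CLI commands like the ones below; the account ID, region, repository, cluster, and service names are illustrative placeholders.

# Authenticate Docker to ECR (region and account ID are placeholders)
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
# Tag and push the freshly built image
docker tag myapp:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
# Roll the ECS service so it pulls the new image
aws ecs update-service --cluster myapp-cluster --service myapp-service --force-new-deployment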
In GitLab, you need a configuration file called .gitlab-ci.yml, which we use to configure each step of the Continuous Integration (CI) process. In our setup, we have 3 steps: test, build, and deploy, so our configuration looks like this:
stages:
- test
- build
- deploy
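Expanding those stages, a minimal .gitlab-ci.yml might look like the sketch below; the images, commands, and CI/CD variables ($ECR_REPO, $ECS_CLUSTER, $ECS_SERVICE) are illustrative placeholders rather than our actual pipeline.

stages:
  - test
  - build
  - deploy

test:
  stage: test
  image: python:3.11            # hypothetical test image
  script:
    - pip install -r requirements.txt
    - pytest

build:
  stage: build
  image: docker:24.0
  services:
    - docker:24.0-dind          # Docker-in-Docker for the image build
  script:
    # $ECR_REPO is a hypothetical CI/CD variable; ECR login omitted for brevity
    - docker build -t $ECR_REPO:$CI_COMMIT_SHORT_SHA .
    - docker push $ECR_REPO:$CI_COMMIT_SHORT_SHA

deploy:
  stage: deploy
  image:
    name: amazon/aws-cli
    entrypoint: [""]            # override the image's aws entrypoint for CI
  script:
    - aws ecs update-service --cluster $ECS_CLUSTER --service $ECS_SERVICE --force-new-deployment
  only:
    - main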
Our process ensures that we deliver high-quality services quickly and efficiently.
Resources to create a GitLab CI/CD pipeline on GitLab for ECS, Postgres, and Django: git-cheat-sheet.pdf; GitLab CI with ECS. The flow is pretty much like this:
1. Push changes to your code repository.
2. GitLab will create a new pipeline and notify the Master Runner (which is hosted on AWS EC2).
3. The Master Runner will then launch a new GitLab Runner server or use an existing one (also on AWS EC2).
4. GitLab Runner will run tests.
5. Once the tests are complete, the Docker build process will begin.
6. Once the build is complete, the Docker image will be stored in the AWS Elastic Container Registry (ECR) and the deployment process will start.
