Using Docker containers and exposing them to the internet can be a cost-effective and flexible approach, especially for experimental projects: it lets you deploy and manage applications easily while keeping control over resource allocation. On this project I worked as a cloud developer.
Project scope
As the cloud developer, my responsibilities included containerization with Docker, backend API development, and server management. I implemented automation with Ansible and Jenkins to streamline deployment and maintenance of the entire application stack, and built APIs with Express.js to handle communication between the backend systems and the container machines.
Implementation
Implement Jenkins API Endpoint
We set up and configured Jenkins servers in our infrastructure. This meant creating a secure environment with authentication in place so that only authorized personnel could access and manage Jenkins. To further tighten security and control, we exposed APIs with proper authentication mechanisms. These APIs played a crucial role in our workflow: they allowed our backend systems to request and obtain the credentials needed to access containers.
Automate Image Containerization with Jenkins & Ansible
We built an automated process using Ansible to create a customized Ubuntu image. The image is configured to allow SSH access, set up with container IP addresses and ports, and then registered with the backend through an API. This streamlined approach ensures that containers can be brought up quickly and efficiently when needed.
Here's a breakdown of the process:
- Custom Ubuntu Image Creation: We start with Ansible playbooks that build a custom Ubuntu image tailored to our requirements. The image is preconfigured with SSH access, giving us the connectivity needed to manage and interact with the containers.
- Container IP and Port Configuration: Once the custom image is ready, Ansible automates the configuration of IP addresses and ports so that each container is properly networked and reachable.
- API Integration: An API serves as the bridge between our backend systems and the containerization infrastructure. Ansible calls this backend API to deliver the credentials needed to access the containers.
- Automation for Rapid Deployment: Automating these steps with Ansible greatly reduces the time and effort needed to bring up containers, letting the infrastructure scale quickly and efficiently to meet demand.
Config Outbound Connection from Host to Containers
To optimize cost efficiency for our instances, we implemented a streamlined approach: containers are reached exclusively by domain name, using the Cloudflare API and an Ngrok reverse proxy. This setup lets us expose containers to our users through a user-friendly domain name.
Here's a breakdown of the process:
- Backend Request: The backend initiates a request to our cloud server, providing the desired subdomain and the associated container ID.
- Subdomain Request: On receiving this information, the cloud server triggers an Ansible script that calls the Cloudflare API to create the specified subdomain.
- Success Notification: Once the subdomain has been created, the cloud server sends a callback to the backend, signaling successful completion of the subdomain creation task.
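The subdomain step boils down to creating a DNS record through Cloudflare's v4 API (`POST /zones/{zone_id}/dns_records`). A minimal sketch, assuming a proxied A record; the zone ID, API token, hostnames, and IP are placeholders:

```javascript
// Pure helper: the JSON body for a Cloudflare DNS record.
function buildDnsRecord(subdomain, rootDomain, hostIp) {
  return {
    type: 'A',
    name: `${subdomain}.${rootDomain}`,
    content: hostIp,
    proxied: true, // route traffic through Cloudflare
  };
}

// Create the record via the Cloudflare v4 API (requires Node 18+ for global fetch).
async function createSubdomain(zoneId, apiToken, record) {
  const res = await fetch(
    `https://api.cloudflare.com/client/v4/zones/${zoneId}/dns_records`,
    {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${apiToken}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(record),
    }
  );
  const body = await res.json();
  if (!body.success) throw new Error(JSON.stringify(body.errors));
  return body.result;
}
```

For example, `createSubdomain(zoneId, token, buildDnsRecord('app1', 'example.com', '203.0.113.10'))` would publish `app1.example.com` pointing at the host, after which the backend callback can fire.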
Systems design
Applications
Terminal on the frontend side once a successful connection has been established.
Configure outbound settings and generate a subdomain for the container.