Monday 2 November 2020

Top 10 services running on my RaspberryPi

Hi friends, I believe you are all aware of the Raspberry Pi; if not, please feel free to read my previous blog, which walks through setting up an RPI.
I have become so fascinated by the Raspberry Pi 3 (RPI3) that I have made up my mind to buy an RPI4 as well.
Owning a Raspberry Pi opens a gateway to numerous opportunities, a few of them being IoT, hacking, home automation, a development box, and more.

The market has a wide range of software and operating systems to offer for the RPI.

Below are the top 10 essential services that I have installed on my RPI.



1. Web Server - This is the first service that one can try on a Linux server. It can be a simple Apache server, or an Apache server with PHP and WordPress installed, backed by a MySQL server, to give users a full-fledged website experience. By default the web server runs on port 80.

$ sudo apt-get install apache2
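If you want the full LAMP-style stack described above, a minimal sketch looks like this (package names assume a recent Debian-based Raspberry Pi OS; WordPress itself is then unpacked into the web root separately):

$ sudo apt-get install php libapache2-mod-php php-mysql mariadb-server
$ sudo systemctl restart apache2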

2. Transmission - It is a lightweight torrent client which can be easily accessed via a web interface. The service can also be protected with a password so that only the intended users can access and manage the torrents on your RPI. Trust me, managing your torrents and creating rules for them was not so easy before installing this service. By default this service runs on port 9091.
$ sudo apt-get install transmission-daemon
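To protect the web interface with a password, stop the daemon before editing its settings file; the keys below are standard Transmission options, while the username and password values are only placeholders:

$ sudo systemctl stop transmission-daemon
$ sudo nano /etc/transmission-daemon/settings.json
    # set "rpc-authentication-required": true,
    #     "rpc-username": "pi",
    #     "rpc-password": "<your-password>"
$ sudo systemctl start transmission-daemon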
3. Samba Server - It is a network file sharing service that uses the SMB/CIFS protocol. Since SMB (Server Message Block) is natively supported by Windows, this server can be used to build a NAS for file sharing between the connected devices in your home.
$ sudo apt-get install samba samba-common-bin
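As a minimal example, appending a share block like the one below to /etc/samba/smb.conf exposes a folder on the network (the share name and path are only examples):

$ sudo nano /etc/samba/smb.conf      # append a share block such as:
[pishare]
   path = /home/pi/share
   read only = no
   browseable = yes
$ sudo smbpasswd -a pi               # set a Samba password for the pi user
$ sudo systemctl restart smbd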

4. XRDP - It is a remote desktop server which uses Microsoft's Remote Desktop Protocol (RDP) to provide a graphical login on remote machines. By installing this service on the RPI, you can connect to its graphical interface from any RDP client. No monitor is required, hence saving on resources. By default, xrdp listens on port 3389.
$ sudo apt-get install xrdp

5. RpiTX - It is a general-purpose radio frequency transmitter for the Raspberry Pi which doesn't require any other hardware. It can transmit frequencies from 5 kHz up to 1500 MHz.
The most basic application can be setting up a home FM station, or annoying someone by interfering with their FM signals in very close vicinity.
RpiTX is capable of transmitting a recorded IQ file directly. This makes copying things like 433 MHz ISM band remotes significantly easier. One application might be to use RpiTX as an internet-connected home automation tool which could control all your wireless devices.
$ git clone https://github.com/F5OEO/rpitx.git
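After cloning, the repository provides an install script that compiles the transmitter tools (script name as found in the rpitx repository; check its README if it has changed):

$ cd rpitx
$ ./install.sh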

6. OpenVPN - This is the best open source VPN solution for the RPI. It proves invaluable when you are outside your home and need to connect to your home network from your mobile phone to download a file from your NAS server, or when you want a partner sitting on another continent to join your network for some project work. By default OpenVPN uses port 1194.
$ curl -L https://install.pivpn.io | bash
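Once the PiVPN installer has finished, client profiles are managed with the pivpn helper command; the profile name is whatever you choose when prompted:

$ pivpn add     # create a new client .ovpn profile
$ pivpn list    # list the existing client profiles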

7. Pi-hole - It is a DNS server for blocking ads in your network traffic. You can access the Pi-hole web admin interface at http://192.168.1.2/admin
It's recommended not to install any other service on the RPI if you are running Pi-hole, as that service's configuration might conflict with it. By pairing your Pi-hole with a VPN service on another RPI, you can have ad blocking on your cellular devices on the go by connecting to the VPN.
$ curl -sSL https://install.pi-hole.net | bash
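Pi-hole also ships a small command line helper that is handy for day-to-day checks and updates:

$ pihole status    # confirm that DNS blocking is active
$ pihole -up       # update Pi-hole to the latest version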

8. Virtual Radar - An RTL-SDR dongle attached to the RPI can be used to locate the position of aircraft. Automatic Dependent Surveillance-Broadcast (ADS-B) is the technology aircraft use to broadcast their whereabouts every second. A piece of software called Dump1090 can be downloaded and configured on the RPI to translate the ADS-B signals collected by the RTL-SDR dongle into useful information about the aircraft.
Dump1090 tunes the RTL-SDR to receive signals on the 1090 MHz frequency.
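Dump1090 has to be built from source first; a typical sequence, assuming the widely used antirez fork and the RTL-SDR development headers, looks like this:

$ sudo apt-get install librtlsdr-dev libusb-1.0-0-dev
$ git clone https://github.com/antirez/dump1090.git
$ cd dump1090 && make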
Run the below command and open the browser to see the realtime flight status --> http://192.168.1.2:8080/
If you are not able to see anything, clear the cache and hit refresh.
$ ./dump1090 --interactive --net --net-beast --net-ro-port 31001 &

9. Portainer - It is a web-based management UI for Docker hosts. Portainer runs as a lightweight Docker container on your Docker host. It's a great tool for managing Docker containers graphically from the web browser. Prerequisite: Docker must be installed on the RPI.
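If Docker is not installed on the RPI yet, the official convenience script is the usual shortcut (inspect the script before piping it into a shell):

$ curl -fsSL https://get.docker.com | sh
$ sudo usermod -aG docker pi    # optional: let the pi user run docker without sudo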
Run the below command to get the Portainer console up and running on port 9000.
$ sudo docker run -d -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data --restart always --name portainer portainer/portainer

10. Mosquitto - This is a pub-sub broker service which works on the MQTT protocol. MQTT is an "Internet of Things" connectivity protocol. Designed as an extremely lightweight and reliable publish/subscribe messaging transport, it is useful for connections with remote locations where a small code footprint is required and/or network bandwidth is at a premium. By default the broker listens on port 1883.

$ sudo apt-get install mosquitto
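A quick way to verify the broker is to install the client utilities and publish a test message to a topic you are subscribed to (the topic and payload below are arbitrary examples):

$ sudo apt-get install mosquitto-clients
$ mosquitto_sub -h localhost -t "test/topic" &
$ mosquitto_pub -h localhost -t "test/topic" -m "hello from RPI"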

Sunday 25 October 2020

Building CI/CD pipeline in AWS infrastructure



Hello fellow engineers! In this blog we are going to create a basic CI/CD pipeline using Jenkins and Ansible to deploy a WAR application onto Apache Tomcat servers, and most importantly, we are utilizing the AWS cloud platform for setting up this infrastructure.
For testing purposes, we have provisioned 2 EC2 instances in AWS for the Jenkins and Ansible servers. Follow the below links for setting up these servers:

We have incorporated the best DevOps practices in building this project. But first, let's understand what DevOps is and why we need it.

DevOps is the combination of cultural philosophies, practices, and tools that increases an organization's ability to deliver applications and services at high velocity. It helps in evolving and improving products at a faster pace than organizations using traditional software development and infrastructure management processes.
This methodology is best suited for development environments requiring very frequent releases.


Purpose of CI/CD Pipeline

We are delivering a product, a web application, and we have to incorporate a lot of changes to it very frequently: new feature rollouts, UI changes, bug fixes, etc.
In this scenario, development happens continuously and the updated application code needs to be deployed and tested very frequently.

To remove the manual effort of each deployment, we created a uniform process that continuously integrates with the code repository and continuously deploys the build automatically.
This is the Continuous Integration/Continuous Deployment pipeline, or CI/CD pipeline for short.




Technologies Used:


Source code management: Git
Build management: Maven
Continuous integration: Jenkins
Artifact management: S3 Bucket, Docker Hub
Configuration management: Ansible
Containerization: Docker containers
Cloud: AWS

Understanding the pipeline in steps


  1. A developer commits his/her code into the Git repository, on which a webhook is configured to trigger the Jenkins job on every successful push to the develop branch.
  2. The Jenkins job pulls the repository and builds the web application package with the help of the Maven tool.
  3. The WAR artifact created needs to be deployed onto the fleet of Tomcat servers, but before deployment it should be stored in an artifact repository to maintain backups and keep versioning of the product. We have configured Jenkins to store this WAR artifact in an S3 bucket.
  4. Jenkins sends this file to the Ansible server via the scp command; since the Tomcat application needs to be deployed on many QA servers, this is an easier and faster way of deploying to a fleet of servers (see the sketch after this list).
  5. The Ansible server runs a playbook that builds the customized Docker image of the Tomcat application and uploads it to a private Docker Hub repository.
  6. Ansible runs another playbook to build containers from the latest Docker image in the QA environment.
  7. Once the deployment is done and the testers have completed their sanity tests on the web application, the application gets approval to be deployed on the staging servers and finally on the production servers.
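To make steps 3 and 4 concrete, here is a rough sketch of what the Jenkins post-build shell step could look like; the bucket name, host name, paths and playbook names are placeholders, not the exact ones used in this project:

# Hypothetical Jenkins "Execute shell" post-build step
aws s3 cp target/webapp.war s3://<artifact-bucket>/webapp-${BUILD_NUMBER}.war
scp target/webapp.war ansadmin@<ansible-server>:/opt/artifacts/webapp.war
ssh ansadmin@<ansible-server> "ansible-playbook /opt/playbooks/build-and-push-image.yml && ansible-playbook /opt/playbooks/deploy-containers.yml"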

I hope that with this blog I was able to help you understand why implementing a CI/CD pipeline is important in delivering a final product. Please reach out to me via comments or email if you have any questions.

Friday 2 October 2020

AEM as a Service

Adobe Experience Manager (AEM) is an enterprise content management system and digital asset management platform. Adobe also offers it as a cloud-based suite of online services for digital asset management.

In this blog, we will learn about the AEM installation process on a Linux server and how to set up an AEM service, to avoid the manual effort of starting the AEM process every time the server is rebooted.

Once the AEM package is downloaded onto your server, it basically contains the AEM Quickstart JAR (AEM_6.x_Quickstart.jar) and a license file, both of which should be placed in the same installation folder on the instance running in Author or Publisher mode.

Installation process


  • Create an installation directory named "aem" under the /opt directory.
    mkdir -p /opt/aem/

  • Copy the downloaded AEM package into the aem folder:
cp cq-quickstart-6.5.0.jar /opt/aem/cq-quickstart-6.5.0.jar
cp license.properties /opt/aem/license.properties
  • Unpack the JAR file
java -jar /opt/aem/cq-quickstart-6.5.0.jar -unpack
  • Unpacking the JAR file creates a folder named 'crx-quickstart' which contains the scripts to start/stop the AEM instance.
  • Edit the start script if you want to run the AEM instance in publisher mode.
CQ_PORT=4503
CQ_RUNMODE='publish'
  • Start the AEM process with this script:
bash /opt/aem/crx-quickstart/bin/start

Implementation of AEM as a Service


1. Create the following file:

vi /etc/systemd/system/aem.service

[Unit]
Description=AEM Author/Publisher Service

[Service]
Type=simple
ExecStart=/opt/aem/crx-quickstart/bin/start
ExecStop=/opt/aem/crx-quickstart/bin/stop
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target

2. Provide the permissions:
chmod 755 /etc/systemd/system/aem.service

3. Reload all systemd service files: 
systemctl daemon-reload

4. Check that it is working by starting the service with the following commands:
systemctl start aem.service
systemctl status aem.service

5. Enable the service to start on boot:
systemctl enable aem.service
Created symlink from /etc/systemd/system/multi-user.target.wants/aem.service to /etc/systemd/system/aem.service.

6. Reboot the server and check the status of AEM service once it boots up again. It should be up and running.
systemctl status aem.service
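To confirm that AEM itself came up (and not just the systemd unit), you can tail its error log and probe the configured port; port 4503 below matches the publish run mode configured earlier:

tail -f /opt/aem/crx-quickstart/logs/error.log
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:4503/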




Tuesday 29 September 2020

AWS CLI - AuthFailure

Hello to my fellow cloud engineers. In this blog we will discuss the solution to a common AWS issue.

ERROR: An error occurred (AuthFailure) when calling the DescribeInstances operation: AWS was not able to validate the provided access credentials

We get this error mostly for one of two reasons: either we have provided the wrong credentials while configuring the CLI, or the date/time is set incorrectly on the server.


Prerequisite

Setting up AWS CLI

# pip3 install awscli

# aws configure

Enter the access_key, secret_key and the region, and you're done setting up the AWS CLI on your machine.


Test the aws configuration

# aws configure list


# aws ec2 describe-instances


An error occurred (AuthFailure) when calling the DescribeInstances operation: AWS was not able to validate the provided access credentials

I kept wondering why I was getting the AuthFailure message, as I was able to run the CLI commands on another system with the same credentials. Later I realized that this was happening due to a mismatch between the system clock and the actual time for the timezone.




# timedatectl set-timezone 'Asia/Kolkata'

Setting the timezone didn't help, as the timezone was already correct but the system time was incorrectly set.

To sync the time, I tried to enable NTP (Network Time Protocol):

# timedatectl set-ntp yes

Failed to set ntp: NTP not supported.

I had to install the NTP service, as it was not present in my case.

# yum install ntp

# systemctl start ntpd
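Before retrying the AWS command, confirm that the daemon is reaching its time servers and that timedatectl now reports the clock as synchronized:

# ntpq -p
# timedatectl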




# timedatectl set-ntp yes 
(This command ran successfully this time)


After that, I was able to run the AWS commands through the CLI successfully:

# aws ec2 start-instances --instance-ids i-07ca6e35eed6c5692



This is how I was able to run AWS CLI commands after resolving the AuthFailure issue. I will be glad if this has resolved your issue too; if not, you can reach out to me through comments/email and I will be happy to help.

Friday 17 January 2020

Touchbase with AWS CLI

Hey fellas, wishing you all a Happy New Year. Today we are going to learn how to create simple scripts with the AWS CLI to gather important information about our deployed infrastructure.

The AWS Management Console is a very convenient way of gathering all this data, but when we require this information as input for other reports/scripts, the AWS CLI comes into play.

AWS CLI vs SDK

The AWS CLI acts like an API for communicating with AWS services from the command line interface.
When an application needs to interact with AWS services, we have to import the AWS SDK into our code to communicate with those services.

Let's begin extracting all possible information from the AWS command line interface, aka the CLI. By the end of this you will be accustomed to the AWS CLI.

#Prerequisites

The AWS CLI should be configured on your local machine with an AWS user having at least read-only permissions for the module that you are going to explore (EC2, RDS, S3).
If you are not allowed to run a command, you will be greeted with a message to contact your IAM administrator for granting the required privileges.

##Installation of AWS CLI tool

$ sudo pip install awscli

##Configuring AWS 

$ aws configure
AWS Access Key ID [None]: DSLH546VSFAP
AWS Secret Access Key [None]: jeoHhzr4FeEd9o6aOBicTK7kRvpJu5JRfFKFCe4yXw
Default region name [None]: ap-south-1
Default output format [None]: <leave blank for default JSON format>


Required permission for accessing EC2 instances: AmazonEC2ReadOnlyAccess




Let us get our hands dirty and begin with the CLI commands to get the desired output:


Use case 1: Describe all of the EC2 instances in any of the supported output formats.

aws ec2 describe-instances --output <table/json/text>

--output parameter: There are 3 formats in which we can get the output through the CLI (table/json/text). The default output format is JSON if we do not specify anything.


Use case 2: Describe all of the EC2 instances and filter out the required information, such as: InstanceId, Name, IP address, public DNS address, and current state.

--query parameter: Used to filter out the desired fields from the output.

aws ec2 describe-instances --query "Reservations[*].Instances[*].{name: Tags[?Key=='Name'] | [0].Value, instance_id: InstanceId, ip_address: PrivateIpAddress, public_dns_name: PublicDnsName, state: State.Name}" --output table



  We can create an alias if the command has to be used very frequently. Place the below lines in the .bash_aliases file in the user's home directory.

$ nano ~/.bash_aliases

alias aws_instances='aws ec2 describe-instances --query "Reservations[*].Instances[*].{name:Tags[?Key=='\''Name'\'']|[0].Value, instance_id:InstanceId, state:State.Name, privateIP:PrivateIpAddress, AZ:Placement.AvailabilityZone, attached_vol:BlockDeviceMappings[0].Ebs.VolumeId, Security_grp:SecurityGroups[0].GroupName}" --output table'

$ source ~/.bash_aliases





Use case 3: Describe all the volumes mounted on the EC2 instances and filter out the Volume ID, Instance ID, Availability Zone, Size (GB) and Snapshot ID.

aws ec2 describe-volumes --query 'Volumes[*].[VolumeId, Attachments[0].InstanceId, AvailabilityZone, Size, SnapshotId, FakeKey]' --output text

We can also attach/detach and create/delete volumes on EC2 instances, and we can create snapshots, as sketched below.
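As a quick sketch of those operations, the commands below use placeholder volume/instance IDs that you would replace with your own:

aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "Backup before resize"
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sdf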

Use case 4: Describe all the DB instances created under RDS and modify their allocated storage.

aws rds describe-db-instances --db-instance-identifier mydbinstance 


aws rds modify-db-instance \
--db-instance-identifier mydbinstance \
--allocated-storage 60 \
--apply-immediately


Use case 5: Creating an instance from the AWS CLI (for this we need EC2 admin privileges).

aws ec2 run-instances --image-id <ami-xxxxxxxx> --count 1 --instance-type t2.micro --key-name <MyKeyPair> \
--security-group-ids <sg-903004f8> --subnet-id <subnet-6e7f829e> \
--iam-instance-profile Name=ecsInstanceRole \
--user-data file://bootstrap.txt
(bootstrap.txt is the name of the file that has the bootstrap commands)


Overall, we have looked at many basic commands across the EC2 and RDS modules. The AWS CLI is a very deep and vast topic to cover in a single blog post.
We can control almost every service hosted on AWS through this powerful tool. I hope this blog has given you some motivation to take a deeper dive into the advanced usage of the AWS CLI.
See you folks in my next blog; till then, keep learning.

Friday 13 September 2019

Docker in a glimpse

Docker is a great tool for building microservices, allowing you to create cloud-based applications and systems.
It is a container management service based on the concept of "develop, ship and run anywhere": develop apps and ship them in containers which can be deployed on any platform.

With the initial release of the Docker technology in March 2013, a new technical term emerged: containerization. It started revolutionizing the old concept of virtualization, where we needed to build a complete guest OS on top of a host OS to run an application.

Virtual machines are resource-intensive, as they consume a lot of resources (compute, memory, etc.) at runtime, whereas containers are lightweight and boot up in seconds.

Virtualization vs Containerization





Architecture of Docker


The basic architecture of Docker is a Client-Server architecture and consists of 3 major parts:

1. Docker Host - The Docker host runs the Docker daemon. The Docker daemon listens for Docker requests, such as 'docker run' or 'docker build'.
It manages docker objects such as images, containers, networks, and volumes.

2. Docker Client - The Docker client is used to issue Docker commands. It sends the commands through a CLI to the Docker daemon, and it can communicate with more than one daemon.

3. Registry - The registry is a stateless, highly scalable server-side application that stores and lets you distribute Docker images. You can create your own image and upload it to Docker Hub or any other configured registry.
When we run the docker pull or docker run commands, the required images are pulled from your configured registry (see the sketch below).
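As a minimal sketch of that flow (the Docker Hub username and image tag are placeholders), the usual commands look like:

docker pull nginx:latest                      # fetch an image from the configured registry
docker run -d --name web -p 8080:80 nginx     # start a container from it
docker build -t <dockerhub-user>/myapp:1.0 .  # build your own image from a Dockerfile
docker push <dockerhub-user>/myapp:1.0        # publish it to Docker Hub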





What are containers?


A container is a special type of process that is isolated from other processes. Containers are assigned resources that no other process can access, and they cannot access any resources that are not explicitly assigned to them.


The technology behind the containers!!


Docker containers evolved from LXC (Linux Containers).
LXC is the well-known and heavily tested low-level Linux container runtime. It has been in active development since 2008 and has implemented various well-known containerization features inside the Linux kernel.
The goal of LXC is to create an environment as close as possible to a standard Linux installation but without the need for a separate kernel. You can read more about LXC on its project page.

The Docker daemon/engine originally used LXC as its execution driver and later replaced it with its own runtime (libcontainer, now runc). LXD, in comparison, is a next-generation system container manager. It offers a completely fresh and intuitive user experience with a single command line tool to manage your containers, and containers can be managed over the network in a transparent way through a REST API.


Features of Containers:



  • Complete isolation - Two containerized processes can run side-by-side on the same computer, but they can't interfere with each other. They can't access each other's data unless explicitly configured to do so.
  • Shared physical infrastructure - We don't need separate hardware for running different applications; all of them can share the same hardware, which means lower costs.
  • More secure - Since only the hardware is shared while the processes and data stay isolated, the setup becomes very secure.
  • Faster scaling of applications - The time to create and destroy containers is negligible, and there is no need to purchase physical infrastructure to scale up an application the way it used to happen years ago.



The future of Docker


As technology shifted from mainframes to PCs, bare metal to virtualization, and datacenters to the cloud, now is the time to move from hosts to containers (going serverless).
As per trend analysis, by 2020 more than 50% of global organizations will be running containers in production.

After going through many articles, I can infer that the technology trend has inclined more towards Kubernetes since DockerCon 2017, as Swarm (Docker's in-house container orchestration tool) started facing tough competition.

That said, it is the simplicity of Docker Swarm as a container orchestrator that has taken Docker to this level.

We haven't seen any recent development in the Docker Swarm repository for quite a long time. (https://github.com/docker/swarm/wiki)

Even Docker itself has adopted Kubernetes as a container orchestrator.

Is this an indication that Docker Swarm will soon be out of the picture as more and more industries adopt Kubernetes in their architecture?

Needless to say, Docker has survived till now and will keep running as an organization, regardless of the speculation that some big organization will acquire it. There are a lot of new upcoming features, and development is still going on.

Incorporating cgroups v2 will give Docker better resource isolation and management capabilities.
Adopting a P2P model for image delivery would make it possible to distribute images using something like BitTorrent sync.


See you in the upcoming tutorials for taking a deeper dive into Docker.

Monday 2 September 2019

ESP8266 IOT sensing with Blynk

Basically, the Internet of Things, or IoT, is a system of interrelated computing devices, mechanical and digital machines, objects, animals or people that are provided with unique identifiers (UIDs) and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction.

Evolution of IOT


The Internet of Things is also a natural extension of SCADA (supervisory control and data acquisition), a category of software application for process control: the gathering of data in real time from remote locations to control equipment and conditions.
SCADA systems include hardware and software components. The hardware gathers and feeds data into a computer that has SCADA software installed, where it is then processed and presented in a timely manner. The evolution of SCADA is such that late-generation SCADA systems developed into first-generation IoT systems.


Designing of our first IOT system


Here we are using an ESP8266 NodeMCU module which collects realtime data from a temperature/humidity sensor (DHT-22) and a soil moisture sensor once every second.

I have created an Arduino sketch in which we collect this sensor data and send it over to the Blynk-hosted API with the help of the Blynk libraries.

  1. Download the latest Blynk_Release_vXX.zip library from the GitHub page and extract the zip file into the library folder of the Arduino IDE.
  2. Download the Blynk app on an Android or iOS device and follow its getting-started steps.
  3. Download the code from my GitHub repository.
git clone https://github.com/saurabh221089/esp8266_iot_blynk.git

Generate your auth token from app and update it in the code.
char auth[] = "53e4da8793764b6197fc44a673ce4e21";
Change your SSID and password in the code and flash it onto your ESP module.
char ssid[] = "wifi4u";  //Enter your WIFI Name
char pass[] = "abcd1234";  //Enter your WIFI Password

In the created project on the app, assign the Virtual Pins to the Variables in the program to PULL the data from the API and show it on the app dashboard. 

Blynk.virtualWrite(V5, h);  //V5 is for Humidity
Blynk.virtualWrite(V6, t);  //V6 is for Temperature
Blynk.virtualWrite(V7, m);  //V7 is for Soil Moisture

Flash this code onto your NodeMCU module and run the app at the same time to start monitoring.

If you just want to build a static weather monitoring module without any networking involved, you can create a weather monitoring station with an ESP8266 and an OLED display.

All the best my friends for your exploration into IOT world. Happy IOTing!!

ESP8266 weather monitor

This tutorial will help you set up your own weather monitor with a NodeMCU development board, on which the ESP8266 module sits as the microcontroller.

We are going to build a weather monitor that shows all the vital stats for my home plantation: temperature (°C), humidity, heat index and soil moisture percentage. This has helped me maintain the perfect levels for apt growth of indoor plants.

What is an ESP8266?


The ESP8266 is a System on a Chip (SoC) manufactured by the Chinese company Espressif. It consists of a Tensilica L106 32-bit microcontroller unit (MCU) and a Wi-Fi transceiver. It has 11 GPIO (General Purpose Input/Output) pins and an analog input as well. This means that you can program it like any normal Arduino or other microcontroller.

And on top of that, you get Wi-Fi communication, so you can use it to connect to your Wi-Fi network, connect to the Internet, host a web server with real webpages, let your smartphone connect to it, etc. The possibilities are endless! It's no wonder that this chip has become the most popular IOT device available in the market today.

Prerequisites


1. The Arduino IDE, to upload the .ino sketch onto the NodeMCU.
2. To program the ESP8266, you'll need a plugin for the Arduino IDE. It can be downloaded from GitHub manually, but it is easier to just add the URL in the Arduino IDE:
  1. Open the Arduino IDE.
  2. Go to File > Preferences.
  3. Paste the URL http://arduino.esp8266.com/stable/package_esp8266com_index.json into the Additional Board Manager URLs field.(You can add multiple URLs, separating them with commas.)
  4. Go to Tools > Board > Board Manager and search for 'esp8266'. Select the newest version, and click install. (As of Sep 1st 2019, the latest stable version is 2.5.0.)

Components Required


  • NodeMCU (ESP8266 development board)
  • OLED 128x64 screen
  • DHT-22 temperature/humidity sensor
  • Capacitive Soil Moisture sensor
  • A few jumper cables to connect the components, and a breadboard.

Github Repo to clone this project and code




Schematic diagram of the complete circuit



If you want to monitor the weather on your Android device when you are outside your home, and want to leverage the full potential of the ESP8266 module, you can create an IoT-enabled weather monitoring station with the ESP8266, a DHT-22 sensor and the Blynk library, and monitor the weather from anywhere around the world.

Monday 26 August 2019

Role of Ansible in DevOps


Ansible is an open source automation platform. Ansible can help you with configuration management, application deployment, task automation and IT orchestration.
It is a simple-to-set-up, efficient and powerful tool that makes an IT professional's life easier.

Features of Ansible:

  1. Ansible uses YAML syntax to define playbook configuration files, which has minimal syntax and is easy for humans to understand.
  2. You write only a few lines of code to manage and provision your infrastructure.
  3. It addresses the problem that developers' work velocity was affected because sysadmins were taking time to configure servers.
  4. Roll-outs and roll-backs are possible, for example if we want to go back to Java 1.8 from a newer Java version.
  5. Grouping of servers allows partial deployment, let's say to 40 servers out of 100.
  6. It uses a push-based configuration methodology, where no agent needs to be installed on the managed nodes.
  7. It is an agent-less configuration tool, unlike Puppet and Chef, which use a pull-based (agent-based) configuration methodology.
  8. Ansible Tower is a GUI-based tool where deployments can be done from a UI, aimed at large enterprises.


A sample Ansible playbook config file to help you understand the syntax:


Download the sample file - Click Here


Playbooks are simple files written in YAML. They are used to declare configurations and to launch tasks synchronously or asynchronously.
A YAML file always starts with '---' (three hyphens).
Indentation matters a lot in YAML files. Running an incorrectly indented file will result in an error, and you'll have to spend a lot of time looking for that extra space.


Hosts are simply the remote machines that Ansible manages. They can have individual variables assigned to them, and can also be organized in groups.


Tasks combine an action with a name. Playbooks exist to run tasks.


Action is a part of a task that specifies which of the modules to run and which arguments to pass to that module. Each task can have only one action, but it may also have other parameters.


Notify is the act of a task registering a change event and informing a handler task that another action needs to be run at the end of the play. If a handler is notified by multiple tasks, it will still be run only once.


Handlers are just like tasks, but they only run when notified by the successful completion of a task. Handlers run in the order they are listed, not in the order they are notified, and they are placed at the same level (indentation) as hosts and tasks in the YAML.
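Pulling these terms together, here is a minimal sketch of a playbook that installs and restarts Apache; the group name matches the web-servers group used later in this blog, while the package and service names are only illustrative:

cat > /opt/playbooks/install-apache.yml <<'EOF'
---
- hosts: web-servers
  become: yes
  tasks:
    - name: Install Apache
      yum:
        name: httpd
        state: present
      notify: restart apache
  handlers:
    - name: restart apache
      service:
        name: httpd
        state: restarted
EOF

ansible-playbook /opt/playbooks/install-apache.yml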

-----------------------------------------------------------------------------------------

We can execute commands on remote machines in 2 ways:

1. Directly, by issuing an ad-hoc command:
ansible all -m copy -a "src=/home/user1/test.html dest=/home/user2"

2. By creating a playbook file and running it:
ansible-playbook /opt/playbooks/copyfile.yml


##To check the syntax of playbook.yml file

ansible-playbook test-playbook.yml --syntax-check


##To check the list of modules that comes installed with Ansible
(You will be surprised to know that there are around 3000 modules installed by default)
ansible-doc -l


##To test a module named ping, for checking connectivity between controller and slave nodes
ansible -m ping web-servers


If you start exploring the various modules in Ansible, trust me, you will fall in love with this tool, as there is almost nothing that cannot be accomplished with it.

Friday 23 August 2019

Setting up an Ansible Server on AWS EC2 instance


To set up an Ansible server on an EC2 instance, we first need to launch the EC2 instance and SSH into it, then follow the below commands to install and configure Ansible on it.
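This walkthrough assumes Ansible itself is already installed on the control instance; on an Amazon Linux 2 instance, one common way (among several) to install it is:

amazon-linux-extras install ansible2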

##Create ansadmin user and update its password 
useradd ansadmin
passwd ansadmin

##Add ansadmin user to sudoers group
echo "ansadmin ALL=(ALL) ALL" >> /etc/sudoers

##This sed command flips "PasswordAuthentication no" to "yes" without opening the sshd_config file in an editor
sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config
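After changing sshd_config, restart the SSH daemon so that the new setting takes effect (assuming a systemd-based distribution):

systemctl restart sshd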

##create ssh keys for password-less authentication between Ansible control server and hosts.

#Login as ansadmin user and generate ssh key on Master
ssh-keygen

#Create same ansadmin user on the target host server.

#Copy Master ssh keys onto all ansible hosts nodes
ssh-copy-id <target-host-server>

#Update target servers IP on /etc/ansible/hosts file on Master (Always use internal Private IP address)
echo "<target host server IP>" >> /etc/ansible/hosts


The Ansible hosts file should now look like this (cat /etc/ansible/hosts):

[web-servers]
10.0.1.20
10.0.1.21
10.0.1.22

#Run ansible command as ansadmin user on Control server. It should be successful.
ansible all -m ping
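A successful run prints something similar to the following for each host (the exact formatting varies a little between Ansible versions):

10.0.1.20 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}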


We have now set up passwordless authentication between all the hosts and the control server, from where we can handle any type of task, such as installing an application, starting/stopping a service, or copying a config file onto the servers.
We will discuss this in our next blog.