Monday 2 November 2020

Top 10 services running on my Raspberry Pi

Hi friends, I believe you are all aware of the Raspberry Pi; if not, please feel free to read my previous blog, which walks through setting up an RPI.
I have become so fascinated by the Raspberry Pi 3 (RPI3) that I have made up my mind to buy the RPI4 as well.
Owning a Raspberry Pi opens a gateway to numerous opportunities, a few of them being IoT, hacking, home automation, a development box, etc.

There is so much software and so many operating systems that the market has to offer for the RPI.

Below are the top 10 essential services that I have installed on my RPI.



1. WebServer - This is the first service that anyone can try on a Linux server. It can be a simple Apache server, or Apache installed with PHP utils and WordPress, backed by a MySQL server, to give users a full-fledged website experience. By default the web server runs on Port 80.

$ sudo apt-get install apache2
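If you want the full LAMP-style stack described above, the extra packages might look like this (package names as on Raspberry Pi OS/Debian; adjust to your release):

$ sudo apt-get install php libapache2-mod-php mariadb-server php-mysql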

2. Transmission - It is a lightweight torrent client which can be easily accessed via a web interface. The service can also be protected with a password so that only the intended users can access and manage the torrents on your RPI. Trust me, managing torrents and creating rules for them was not so easy before I installed this service. By default this service runs on Port 9091.
$ sudo apt-get install transmission-daemon
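To password-protect the web interface, stop the daemon first (it rewrites its config file on exit) and edit the RPC keys; on Debian-based systems the file lives at /etc/transmission-daemon/settings.json, and the username/password below are just placeholders:

$ sudo systemctl stop transmission-daemon
$ sudo nano /etc/transmission-daemon/settings.json
    "rpc-authentication-required": true,
    "rpc-username": "pi",
    "rpc-password": "yourpassword",
$ sudo systemctl start transmission-daemon

The plain-text password is hashed automatically the next time the daemon starts.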
3. Samba Server - It is a network file sharing service which uses the SMB/CIFS protocol. As SMB (Server Message Block) is natively supported by Windows, this server can be used to build a NAS for file sharing between the connected devices in your home.
$ sudo apt-get install samba samba-common-bin
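A minimal share definition can be appended to /etc/samba/smb.conf; the share name, path and user below are just examples:

[pishare]
   path = /home/pi/share
   writeable = yes
   browseable = yes

$ sudo smbpasswd -a pi
$ sudo systemctl restart smbd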

4. XRDP - It is a remote desktop server which uses Microsoft's Remote Desktop Protocol (RDP) to provide a graphical login on remote machines. By installing this service on the RPI, you can reach its graphical interface from any RDP client, such as the Remote Desktop client built into Windows. No monitor is required, hence savings on resources. By default, xrdp listens on Port 3389.
$ sudo apt-get install xrdp
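To connect from another Linux machine, one option is the FreeRDP client (the IP below is an example):

$ sudo apt-get install freerdp2-x11
$ xfreerdp /v:192.168.1.2 /u:pi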

5. RpiTX
- It is a general-purpose radio frequency transmitter for the Raspberry Pi which doesn't require any other hardware. It can transmit frequencies from 5 kHz up to 1500 MHz.
The most basic application can be setting up a home FM station, or annoying someone by interfering with their FM signals in very close vicinity.
RPiTX is capable of transmitting a recorded IQ file directly. This makes copying things like 433 MHz ISM band remotes significantly easier. One application might be to use RPiTX as an internet-connected home automation tool which could control all your wireless devices.
$ git clone https://github.com/F5OEO/rpitx.git
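After cloning, the repository ships an install script, and the bundled sendiq tool handles the IQ-file replay mentioned above. The flags below follow the project's README at the time of writing; check sendiq -h on your version:

$ cd rpitx && ./install.sh
$ sudo ./sendiq -s 250000 -f 433.92e6 -t u8 -i record.iq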

6. OpenVPN - This is the best open source VPN solution for the RPI. It proves amazing when you are outside your home and need to connect to your home network from your mobile phone to download a file from your NAS server, or when you want a partner sitting on another continent to be a part of your network for some project-related work. By default OpenVPN uses Port 1194. The command below installs it via PiVPN, a setup wizard for OpenVPN on the Pi.
$ curl -L https://install.pivpn.io | bash
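Once installed, PiVPN provides a helper command for managing client profiles:

$ pivpn add
$ pivpn list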

7. PiHole - It is a DNS server for blocking ads in your network traffic. You can access the Pi-hole web admin interface at http://192.168.1.2/admin
It's recommended not to install any other service on the RPI if you are running Pi-hole, as another service's configuration might cause issues with it. By pairing your Pi-hole with a VPN service on another RPI, you can have ad blocking on your cellular devices on the go by connecting to the VPN.
$ curl -sSL https://install.pi-hole.net | bash
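To set (or reset) the password for the web admin interface:

$ pihole -a -p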

8. Virtual Radar - An RTL-SDR dongle attached to the RPI can be used to locate the positions of aircraft. Automatic Dependent Surveillance-Broadcast (ADS-B) is the technology with which aircraft broadcast their whereabouts every second. A piece of software called Dump1090, once downloaded and configured on the RPI, can translate the ADS-B signals collected by the RTL-SDR dongle into useful information about the aircraft.
Dump1090 tunes the RTL-SDR to receive the signals on the 1090 MHz frequency.
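Dump1090 can be built from source; a sketch assuming the popular antirez repository and Debian package names:

$ sudo apt-get install git librtlsdr-dev libusb-1.0-0-dev pkg-config
$ git clone https://github.com/antirez/dump1090.git
$ cd dump1090 && make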
Run the below command and open the browser to see the realtime flight status --> http://192.168.1.2:8080/
If nothing shows up, clear the cache and hit refresh.
$ ./dump1090 --interactive --net --net-beast --net-ro-port 31001 &

9. Portainer - It is a web-based management UI for Docker hosts. Portainer runs as a lightweight Docker container on your Docker host, and it's a great tool for managing Docker containers graphically from the web browser. Prerequisite: Docker should be installed on the RPI.
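If Docker isn't installed yet, Docker's convenience script works on Raspberry Pi OS:

$ curl -sSL https://get.docker.com | sh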
Run the below command to get the Portainer console up and running on Port 9000.
$ sudo docker run -d -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock \
-v portainer_data:/data --restart always --name portainer portainer/portainer

10. Mosquitto - This is a pub-sub broker service which speaks the MQTT protocol. MQTT is an "Internet of Things" connectivity protocol: designed as an extremely lightweight and reliable publish/subscribe messaging transport, it is useful for connections with remote locations where a small code footprint is required and/or network bandwidth is at a premium. By default the broker starts listening on Port 1883.

$ sudo apt-get install mosquitto
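As a quick smoke test using the client tools (the topic name is just an example), subscribe in one terminal and publish from another; the message should appear on the subscriber's side:

$ sudo apt-get install mosquitto-clients
$ mosquitto_sub -h localhost -t test/topic
$ mosquitto_pub -h localhost -t test/topic -m "hello from RPI"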

Sunday 25 October 2020

Building CI/CD pipeline in AWS infrastructure



Hello fellow engineers, in this blog we are going to create a basic CI/CD pipeline using Jenkins and Ansible to deploy a WAR application onto Apache Tomcat servers, and most importantly we are utilizing the AWS cloud platform for setting up this infrastructure.
For testing purposes, we have provisioned 2 EC2 instances in AWS for creating the Jenkins and Ansible servers. Follow the below links for setting up these servers:

We have incorporated the best DevOps practices in building this project. But first, let's understand what DevOps is and why we need it.

DevOps is the combination of cultural philosophies, practices, and tools that increases an organization's ability to deliver applications and services at high velocity. It helps in evolving and improving products at a faster pace than organizations using traditional software development and infrastructure management processes.
This methodology is best suited for development environments that require very frequent releases.


Purpose of CI/CD Pipeline

We are delivering a product which is a web application, and we have to incorporate a lot of changes to the application very frequently: new feature rollouts, UI changes, bug fixes, etc.
In this scenario, development happens continuously, and the updated application code needs to be deployed and tested very frequently.

To eliminate the manual effort of each deployment, we created a uniform process that continuously integrates with the code repository and continuously deploys the build automatically.
This is the Continuous Integration/Continuous Deployment pipeline, or CI/CD pipeline for short.




Technologies Used:


Source code management: Git
Build management: Maven
Continuous integration: Jenkins
Artifact management: S3 Bucket, Docker Hub
Configuration management: Ansible
Containerization: Docker containers
Cloud: AWS

Understanding the pipeline in steps


  1. A developer commits his/her code into the Git repository, in which a webhook is configured to trigger the Jenkins job on every successful push to the develop branch.
  2. This triggers the Jenkins job, which pulls the repository and builds the web application package with the help of the Maven tool.
  3. The package, or WAR artifact, needs to be deployed onto the fleet of Tomcat servers, but before deploying it should be stored in an artifactory to maintain a backup and keep versioning of the product. We have configured Jenkins to store this WAR artifact in an S3 bucket.
  4. Jenkins sends this file to the Ansible server via the scp command. As the Tomcat application needs to be deployed on many QA servers, this is an easier and faster way of deploying to a fleet of servers.
  5. The Ansible server runs the playbook that builds the customized Docker image of the Tomcat application and uploads it to the Docker Hub private repository (a sketch of such a playbook appears after this list).
  6. Ansible runs another playbook to roll out containers with the latest Docker image on the QA environment.
  7. Once the deployment is done and the testers have completed their sanity test on the web application, the application gets approval to be deployed on the staging servers and finally on the production servers.
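A minimal sketch of the image-build playbook from step 5. The paths, image name, Docker Hub repository and the build_id variable are illustrative assumptions, not the exact playbook used in this project:

---
- hosts: localhost
  tasks:
    - name: Copy the WAR shipped by Jenkins into the Docker build context
      copy:
        src: /opt/artifacts/webapp.war
        dest: /opt/docker/webapp.war

    - name: Build the customized Tomcat image (Dockerfile lives in /opt/docker)
      command: docker build -t mydockerhubuser/tomcat-webapp:{{ build_id }} /opt/docker

    - name: Push the image to the Docker Hub private repository
      command: docker push mydockerhubuser/tomcat-webapp:{{ build_id }}

It can be invoked from the Jenkins job, for example: ansible-playbook build-image.yml -e build_id=${BUILD_NUMBER}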

I hope this blog has helped you understand why implementing a CI/CD pipeline is important in delivering a final product. Please reach out to me via comments or email if you have any questions.

Friday 2 October 2020

AEM as a Service

Adobe Experience Manager (AEM) is an enterprise content management system and digital asset management platform. Adobe also offers it as a suite of online cloud-based services for digital asset management.

In this blog, we will learn about the AEM installation process on a Linux server and set up AEM as a systemd service, to avoid the manual effort of starting the AEM process every time the server is rebooted.

Once the AEM package is downloaded to your server, you will find it basically contains the AEM Quickstart JAR (AEM_6.x_Quickstart.jar) and a license file; both should be placed in the same installation folder on the instance, whether it runs in Author or Publisher mode.

Installation process


  • Create an installation directory named "aem" under the opt directory.
    mkdir -p /opt/aem/

  • Copy the downloaded AEM package into the aem folder:
cp cq-quickstart-6.5.0.jar /opt/aem/cq-quickstart-6.5.0.jar
cp license.properties /opt/aem/license.properties
  • Unpack the JAR file
java -jar /opt/aem/cq-quickstart-6.5.0.jar -unpack
  • Unpacking the JAR file creates a folder named ‘crx-quickstart’ which contains the scripts to start/stop the AEM instance.
  • Edit the start script if you want to run the AEM instance in Publisher mode:
CQ_PORT=4503
CQ_RUNMODE='publish'
  • Start the AEM process with this script:
bash /opt/aem/crx-quickstart/bin/start
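You can verify that the instance is coming up by tailing the AEM error log (path as created by the unpack step above):

tail -f /opt/aem/crx-quickstart/logs/error.log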

Implementation of AEM as a Service


1. Create the following file:

vi /etc/systemd/system/aem.service

[Unit]
Description=AEM Author/Publisher Service

[Service]
Type=simple
ExecStart=/opt/aem/crx-quickstart/bin/start
ExecStop=/opt/aem/crx-quickstart/bin/stop
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target

2. Provide the permissions (unit files should not be executable):
chmod 644 /etc/systemd/system/aem.service

3. Reload all systemd service files: 
systemctl daemon-reload

4. Check that it is working by starting the service with the following commands:
systemctl start aem.service
systemctl status aem.service

5. Enable the service to start on boot:
systemctl enable aem.service
Created symlink from /etc/systemd/system/multi-user.target.wants/aem.service to /etc/systemd/system/aem.service.

6. Reboot the server and check the status of AEM service once it boots up again. It should be up and running.
systemctl status aem.service




Tuesday 29 September 2020

AWS CLI - AuthFailure

Hello to my fellow cloud engineers. In this blog we will discuss the solution to a common AWS issue.

ERROR: An error occurred (AuthFailure) when calling the DescribeInstances operation: AWS was not able to validate the provided access credentials

We get this error mostly for one of 2 reasons: either we have provided wrong credentials while configuring the CLI, or the date/time is set incorrectly on the server.


Prerequisite

Setting up AWS CLI

# pip3 install awscli

# aws configure

Enter the access_key, secret_key and the region, and you're done setting up the AWS CLI on your machine.


Test the aws configuration

# aws configure list


# aws ec2 describe-instances


An error occurred (AuthFailure) when calling the DescribeInstances operation: AWS was not able to validate the provided access credentials

I kept wondering why I was getting the AuthFailure message, as I was able to run the CLI commands on another system with the same credentials. Later I realized that this was happening due to a mismatch between the system clock and the actual time: AWS signs every API request with a timestamp, so a skewed clock makes the request signature invalid.
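Before changing anything, it is worth comparing the system clock against the actual time:

# date -u
# timedatectl status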




# timedatectl set-timezone 'Asia/Kolkata'

Setting the timezone didn't help, as the timezone was already correct; the system time itself was set wrong.

To sync the time, I tried to enable NTP (Network Time Protocol):

# timedatectl set-ntp yes

Failed to set ntp: NTP not supported.

I had to install the NTP service, as it was not present in my case.

# yum install ntp

# systemctl start ntpd




# timedatectl set-ntp yes 
(This command ran successfully this time)


Now I am able to run the AWS commands through the CLI successfully:

# aws ec2 start-instances --instance-ids i-07ca6e35eed6c5692



This is how I was able to run AWS CLI commands after resolving the AuthFailure issue. I will be glad if this resolves your issue too; otherwise you can reach out to me through comments/email and I will be happy to help.

Friday 17 January 2020

Touchbase with AWS CLI

Hey fellas, wishing you all a Happy New Year. Today we are going to learn how to create simple scripts with the AWS CLI to gather important information about our deployed infrastructure.

The AWS Management Console is a very convenient way of gathering all this data. But when we require this information as input for other reports/scripts, the AWS CLI comes into play.

AWS CLI vs SDK

The AWS CLI is a command line interface to the AWS APIs, used for communicating with AWS services from the terminal.
When an application needs to interact with AWS services, we instead import the AWS SDK in our code to communicate with the services.

Let's begin extracting all possible information from the AWS command line interface, aka the CLI. By the end of this you will be accustomed to the AWS CLI.

Prerequisites

The AWS CLI should be configured on your local machine with an AWS user having at least read-only permissions for the modules that you are going to explore (EC2, RDS, S3).
If you are not permitted to run a command, you will be greeted with a message telling you to contact your IAM administrator to grant the privileges.

Installation of the AWS CLI tool

$ sudo pip install awscli

Configuring AWS

$ aws configure
AWS Access Key ID [None]: DSLH546VSFAP
AWS Secret Access Key [None]: jeoHhzr4FeEd9o6aOBicTK7kRvpJu5JRfFKFCe4yXw
Default region name [None]: ap-south-1
Default output format [None]: <leave blank for default JSON format>


Required permission for accessing EC2 instances: AmazonEC2ReadOnlyAccess




Let us get our hands dirty and begin with the CLI commands to get the desired output:


Use case 1: To describe all of the EC2 instances in any of the mentioned output formats.

aws ec2 describe-instances --output <table/json/text>

--output parameter: There are 3 formats in which we can get the output through the CLI (table/json/text). The default output format is JSON if we do not mention anything.


Use case 2: To describe all of the EC2 instances and filter out the required information, like: InstanceId, Name, IP address, public DNS address, current state

--query parameter: Used to filter out the desired fields from the output

aws ec2 describe-instances --query "Reservations[*].Instances[*].{name: Tags[?Key=='Name'] | [0].Value, instance_id: InstanceId, ip_address: PrivateIpAddress, public_dns_name: PublicDnsName, state: State.Name}" --output table



  We can create an alias if the command has to be used very frequently. Place the below lines in the .bash_aliases file in the user's home directory.

$ nano ~/.bash_aliases

alias aws_instances='aws ec2 describe-instances --query "Reservations[*].Instances[*].{name:Tags[?Key=='\''Name'\'']|[0].Value, instance_id:InstanceId, state:State.Name, privateIP:PrivateIpAddress, AZ:Placement.AvailabilityZone, attached_vol:BlockDeviceMappings[0].Ebs.VolumeId, Security_grp:SecurityGroups[0].GroupName}" --output table'

(Note the '\'' sequences: a literal single quote cannot appear directly inside a single-quoted alias, so it has to be escaped this way.)

$ source ~/.bash_aliases





Use case 3: To describe all the mounted Volumes on all of the EC2 instances and filter out the Volume ID, Instance ID, Availability Zone, Size (GB) and Snapshot IDs

aws ec2 describe-volumes --query 'Volumes[*].[VolumeId, Attachments[0].InstanceId, AvailabilityZone, Size, SnapshotId]' --output text

We can also attach/detach and create/delete volumes on EC2 instances, as well as create snapshots.

Use case 4: To describe all the DB instances created under RDS and modify their storage.

aws rds describe-db-instances --db-instance-identifier mydbinstance 


aws rds modify-db-instance \
--db-instance-identifier mydbinstance \
--allocated-storage 60 \
--apply-immediately


Use case 5: Creating an Instance from AWS CLI (for this we need EC2 admin privileges)

aws ec2 run-instances \
--image-id <ami-xxxxxxxx> \
--count 1 \
--instance-type t2.micro \
--key-name <MyKeyPair> \
--security-group-ids <sg-903004f8> \
--subnet-id <subnet-6e7f829e> \
--iam-instance-profile Name=ecsInstanceRole \
--user-data file://bootstrap.txt

(bootstrap.txt is the file that contains the bootstrap commands)
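A hypothetical example of what bootstrap.txt might contain, here for an Amazon Linux 2 instance (the packages are just an illustration):

#!/bin/bash
yum update -y
yum install -y httpd
systemctl enable --now httpd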


Overall, we have looked into many basic commands under the different modules: EC2, RDS, S3 and ELB. The AWS CLI is a very deep and vast topic to cover in a single blog.
We can control almost every service hosted on AWS through this powerful tool. I hope this blog has given you some motivation to take a deeper dive into the advanced usage of the AWS CLI.
See you folks in my next blog; till then, keep learning.