Useful Linux networking commands

Hi readers,

I have been talking about cloud and OpenStack on this blog, readers.

To gain more knowledge about cloud and OpenStack, you need a good knowledge of networking.

Following are the basic Linux networking commands which you must be aware of.

 

ifconfig –

ifconfig (interface configurator) is used to initialize an interface, assign an IP address to an interface, and enable or disable an interface on demand. With this command you can view the IP address and hardware/MAC address assigned to an interface, as well as the MTU (Maximum Transmission Unit) size.

 

ifconfig interface –

e.g. ifconfig eth0

ifconfig with an interface name (eth0) shows only that interface's details, such as its IP address and MAC address. The -a option displays details of all available interfaces, including disabled ones.
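
A quick sketch of typical usage (the interface name eth0 and the addresses below are just placeholders; your output will differ):

ifconfig                   # show all active interfaces
ifconfig -a                # show all interfaces, including disabled ones
ifconfig eth0              # show only eth0: IP address, MAC address, MTU, RX/TX counters
sudo ifconfig eth0 192.168.1.10 netmask 255.255.255.0   # assign an IP address to eth0
sudo ifconfig eth0 up      # enable the interface
sudo ifconfig eth0 down    # disable the interface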

 

ping ip address/url –

e.g. ping google.com or ping x.x.x.x

PING (Packet INternet Groper) is the best way to test connectivity between two nodes, whether on a Local Area Network (LAN) or a Wide Area Network (WAN). Ping uses ICMP (Internet Control Message Protocol) to communicate with other devices.
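
For example (the host and count are just placeholders):

ping -c 4 google.com       # send 4 ICMP echo requests and stop
ping -c 4 8.8.8.8          # ping an IP address directly, which helps rule out DNS problems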

 

traceroute –

e.g. traceroute ip-address

traceroute is a network troubleshooting utility which shows the number of hops taken to reach a destination and determines the path the packets travel. For example, tracing the route to a global DNS server's IP address shows whether the destination is reachable and the path the packets take to get there.
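
A minimal example, tracing the route to a public DNS server (8.8.8.8 is used purely as an illustration):

traceroute 8.8.8.8         # print every hop (router) between this host and the destination
traceroute -n 8.8.8.8      # same, but skip reverse DNS lookups so it runs faster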

 

netstat –

e.g. netstat -tnlp

It is a very important command to check whether your desired port is listening for the desired service.

netstat (network statistics) displays connection information, routing table information, etc. To display the routing table, use the -r option.
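
Some common invocations (these are standard net-tools netstat flags):

netstat -tnlp              # TCP (-t) listening (-l) sockets, numeric addresses (-n), owning process (-p; run with sudo to see every process)
netstat -r                 # display the kernel routing table
netstat -i                 # per-interface statistics
netstat -s                 # per-protocol statistics (TCP, UDP, ICMP, ...)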

 

dig –

e.g. dig www.google.com

dig (domain information groper) queries DNS-related information such as A records, CNAME records, MX records, etc. This command is mainly used to troubleshoot DNS-related queries.
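
For example (google.com is just a sample domain; the record types shown are the common ones):

dig www.google.com         # default query, returns the A record
dig google.com MX          # mail exchanger records
dig google.com NS          # name servers for the domain
dig @8.8.8.8 google.com    # query a specific DNS server instead of the one in /etc/resolv.conf
dig +short google.com      # terse output, just the answer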

 

route –

e.g. route

The route command shows and manipulates the IP routing table. To see the default routing table in Linux, simply run route with no arguments.
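
For example (the gateway address is only an illustration):

route -n                               # show the routing table with numeric addresses
sudo route add default gw 192.168.1.1  # add a default gateway
sudo route del default gw 192.168.1.1  # remove it again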

 

 



 


What is cloud computing in simple words

Hi readers,

Many people talk about cloud computing in detail, covering things like virtualization and storage, but very few are able to explain the term in basic, simple words.

Readers of this blog will understand the basic but important concept of cloud computing.

 

What is Cloud Computing –

Cloud computing is a type of computing that uses groups of remote servers and software networks to allow centralized data storage and online access to computer services or resources. Using cloud computing we can manipulate, configure and access applications online. In short, it is the delivery of on-demand computing resources.


What is open source and its importance

Hi readers,

I can see that currently many people are talking about open source: open-source projects, techniques, functionality, usability.

Only a few have perfect knowledge of the word/world of open source.

The reader of this blog will gain a good knowledge of open source.

So go through this blog and earn great knowledge of open source.

Open-source

Open source refers to a program or software in which the source code (the form of the program when a programmer writes it in a particular programming language) is available to the general public for use and/or modification from its original design, free of charge.

Open-source code is typically created as a collaborative effort in which programmers improve upon the code and share the changes within the community.

Proprietary software is privately owned and controlled. In the computer industry, proprietary is considered the opposite of open. A proprietary design or technique is one that is owned by a company. It also implies that the company has not divulged specifications that would allow other companies to duplicate the product.

Why choose open source –

  • Control
  • Training
  • Security
  • Stability
  • You can be famous
  • Your own product/project
  • Understanding code standards
  • Helping people who can't afford costly software
  • Freedom
  • Support options
  • Quality

Some open-source projects

  • Mozilla
  • Chromium
  • Apache
  • Ubuntu
  • Python
  • OpenOffice
  • Paint.net

Python

Python is an open-source, high-level, interpreted programming language.

Some basic but important things in Python: dir(datatype/variable)

Hi readers,

I am writing this blog just to show you how Python makes things easier even when you don't remember the built-in functions of a data structure/variable in Python.

Let's say a = [1,2,3,4,5]

a is a list. You want to perform some operation on it using a built-in function, but you don't know all the built-in functions related to lists.

So, what should the reader of this blog do?

He/she should run dir(a)

and he/she will get a list of all the built-in methods.

example-

>>> a=[1,2,3,4,5]
>>> a
[1, 2, 3, 4, 5]
>>> dir(a)
[‘__add__’, ‘__class__’, ‘__contains__’, ‘__delattr__’, ‘__delitem__’, ‘__delslice__’, ‘__doc__’, ‘__eq__’, ‘__format__’, ‘__ge__’, ‘__getattribute__’, ‘__getitem__’, ‘__getslice__’, ‘__gt__’, ‘__hash__’, ‘__iadd__’, ‘__imul__’, ‘__init__’, ‘__iter__’, ‘__le__’, ‘__len__’, ‘__lt__’, ‘__mul__’, ‘__ne__’, ‘__new__’, ‘__reduce__’, ‘__reduce_ex__’, ‘__repr__’, ‘__reversed__’, ‘__rmul__’, ‘__setattr__’, ‘__setitem__’, ‘__setslice__’, ‘__sizeof__’, ‘__str__’, ‘__subclasshook__’, ‘append’, ‘count’, ‘extend’, ‘index’, ‘insert’, ‘pop’, ‘remove’, ‘reverse’, ‘sort’]
>>> dir(list)
[‘__add__’, ‘__class__’, ‘__contains__’, ‘__delattr__’, ‘__delitem__’, ‘__delslice__’, ‘__doc__’, ‘__eq__’, ‘__format__’, ‘__ge__’, ‘__getattribute__’, ‘__getitem__’, ‘__getslice__’, ‘__gt__’, ‘__hash__’, ‘__iadd__’, ‘__imul__’, ‘__init__’, ‘__iter__’, ‘__le__’, ‘__len__’, ‘__lt__’, ‘__mul__’, ‘__ne__’, ‘__new__’, ‘__reduce__’, ‘__reduce_ex__’, ‘__repr__’, ‘__reversed__’, ‘__rmul__’, ‘__setattr__’, ‘__setitem__’, ‘__setslice__’, ‘__sizeof__’, ‘__str__’, ‘__subclasshook__’, ‘append’, ‘count’, ‘extend’, ‘index’, ‘insert’, ‘pop’, ‘remove’, ‘reverse’, ‘sort’]
>>> b=1
>>> dir(b)
[‘__abs__’, ‘__add__’, ‘__and__’, ‘__class__’, ‘__cmp__’, ‘__coerce__’, ‘__delattr__’, ‘__div__’, ‘__divmod__’, ‘__doc__’, ‘__float__’, ‘__floordiv__’, ‘__format__’, ‘__getattribute__’, ‘__getnewargs__’, ‘__hash__’, ‘__hex__’, ‘__index__’, ‘__init__’, ‘__int__’, ‘__invert__’, ‘__long__’, ‘__lshift__’, ‘__mod__’, ‘__mul__’, ‘__neg__’, ‘__new__’, ‘__nonzero__’, ‘__oct__’, ‘__or__’, ‘__pos__’, ‘__pow__’, ‘__radd__’, ‘__rand__’, ‘__rdiv__’, ‘__rdivmod__’, ‘__reduce__’, ‘__reduce_ex__’, ‘__repr__’, ‘__rfloordiv__’, ‘__rlshift__’, ‘__rmod__’, ‘__rmul__’, ‘__ror__’, ‘__rpow__’, ‘__rrshift__’, ‘__rshift__’, ‘__rsub__’, ‘__rtruediv__’, ‘__rxor__’, ‘__setattr__’, ‘__sizeof__’, ‘__str__’, ‘__sub__’, ‘__subclasshook__’, ‘__truediv__’, ‘__trunc__’, ‘__xor__’, ‘bit_length’, ‘conjugate’, ‘denominator’, ‘imag’, ‘numerator’, ‘real’]
>>> dir(int)
[‘__abs__’, ‘__add__’, ‘__and__’, ‘__class__’, ‘__cmp__’, ‘__coerce__’, ‘__delattr__’, ‘__div__’, ‘__divmod__’, ‘__doc__’, ‘__float__’, ‘__floordiv__’, ‘__format__’, ‘__getattribute__’, ‘__getnewargs__’, ‘__hash__’, ‘__hex__’, ‘__index__’, ‘__init__’, ‘__int__’, ‘__invert__’, ‘__long__’, ‘__lshift__’, ‘__mod__’, ‘__mul__’, ‘__neg__’, ‘__new__’, ‘__nonzero__’, ‘__oct__’, ‘__or__’, ‘__pos__’, ‘__pow__’, ‘__radd__’, ‘__rand__’, ‘__rdiv__’, ‘__rdivmod__’, ‘__reduce__’, ‘__reduce_ex__’, ‘__repr__’, ‘__rfloordiv__’, ‘__rlshift__’, ‘__rmod__’, ‘__rmul__’, ‘__ror__’, ‘__rpow__’, ‘__rrshift__’, ‘__rshift__’, ‘__rsub__’, ‘__rtruediv__’, ‘__rxor__’, ‘__setattr__’, ‘__sizeof__’, ‘__str__’, ‘__sub__’, ‘__subclasshook__’, ‘__truediv__’, ‘__trunc__’, ‘__xor__’, ‘bit_length’, ‘conjugate’, ‘denominator’, ‘imag’, ‘numerator’, ‘real’]
>>> c='string'
>>> dir(c)
[‘__add__’, ‘__class__’, ‘__contains__’, ‘__delattr__’, ‘__doc__’, ‘__eq__’, ‘__format__’, ‘__ge__’, ‘__getattribute__’, ‘__getitem__’, ‘__getnewargs__’, ‘__getslice__’, ‘__gt__’, ‘__hash__’, ‘__init__’, ‘__le__’, ‘__len__’, ‘__lt__’, ‘__mod__’, ‘__mul__’, ‘__ne__’, ‘__new__’, ‘__reduce__’, ‘__reduce_ex__’, ‘__repr__’, ‘__rmod__’, ‘__rmul__’, ‘__setattr__’, ‘__sizeof__’, ‘__str__’, ‘__subclasshook__’, ‘_formatter_field_name_split’, ‘_formatter_parser’, ‘capitalize’, ‘center’, ‘count’, ‘decode’, ‘encode’, ‘endswith’, ‘expandtabs’, ‘find’, ‘format’, ‘index’, ‘isalnum’, ‘isalpha’, ‘isdigit’, ‘islower’, ‘isspace’, ‘istitle’, ‘isupper’, ‘join’, ‘ljust’, ‘lower’, ‘lstrip’, ‘partition’, ‘replace’, ‘rfind’, ‘rindex’, ‘rjust’, ‘rpartition’, ‘rsplit’, ‘rstrip’, ‘split’, ‘splitlines’, ‘startswith’, ‘strip’, ‘swapcase’, ‘title’, ‘translate’, ‘upper’, ‘zfill’]

RethinkDB: the open-source database for the realtime web

RethinkDB is an open-source, NoSQL, distributed, document-oriented database. It stores JSON documents with dynamic schemas, and is designed to facilitate pushing real-time updates for query results to applications.

 

ReQL is the RethinkDB query language. It offers a very powerful and convenient way to manipulate JSON documents.

 

Instead of polling for changes, the developer can tell RethinkDB to continuously push updated query results to applications in realtime. You can also write applications on top of RethinkDB using the traditional query-response paradigm, and subscribe to realtime feeds later as you start adding realtime functionality to your app. This approach offers a much higher level of abstraction: RethinkDB's feeds integrate seamlessly with the query computation engine and allow you to subscribe to changes on query results, not just raw replication data. This architecture dramatically reduces the time and effort necessary to build scalable realtime apps. In addition, RethinkDB offers:

  • An advanced query language that supports table joins, subqueries, and massively parallelized distributed computation.
  • An elegant and powerful operations and monitoring API that integrates with the query language and makes scaling RethinkDB dramatically easier.
  • A simple and beautiful administration UI that lets you shard and replicate in a few clicks, and offers online documentation and query language suggestions.

 


 

RethinkDB's query language, ReQL, can do nearly anything SQL can do, including table joins and aggregation functions, and it's powerful, expressive and easy to learn. ReQL can also do many things SQL can't do, including mixing queries with JavaScript expressions and map-reduce.

 

RethinkDB supports client-side triggers through changefeeds.

 

 

When a server fails, it may be because of a network availability issue or something more serious, such as a system failure. In a multi-server configuration, where tables have multiple replicas distributed among multiple physical machines, RethinkDB will be able to maintain availability automatically.

 

 

Development support

You can see the live performance of the cluster using graphs on the RethinkDB dashboard, and you can write to and view databases easily from the dashboard.

 

RethinkDB uses three ports to operate—the HTTP web UI port, the client drivers port, and the intracluster traffic port. You can connect the browser to the web UI port to administer the cluster right from your browser, and connect the client drivers to the client driver port to run queries from your application. If you’re running a cluster, different RethinkDB nodes communicate with each other via the intracluster traffic port.
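
As a rough sketch of how those defaults fit together (the port numbers below are RethinkDB's defaults, and the hostname in the last line is just a placeholder):

rethinkdb --bind all                                                  # web UI on 8080, client drivers on 28015, intracluster traffic on 29015 by default
rethinkdb --http-port 8080 --driver-port 28015 --cluster-port 29015  # the same ports stated explicitly
rethinkdb --join other-node:29015                                     # join an existing cluster through its intracluster port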

 

 

When to use –

RethinkDB is a great choice when your applications could benefit from realtime feeds to your data.

The query-response database access model works well on the web because it maps directly to HTTP’s request-response. However, modern applications require sending data directly to the client in realtime. Use cases where companies benefited from RethinkDB’s realtime push architecture include

 

 

example-

  • Collaborative web and mobile apps

  • Streaming analytics apps

  • Multiplayer games

  • Realtime marketplaces

  • Connected devices

Port – Once RethinkDB is running, you can connect to it at http://localhost:8080, assuming you've kept the default port (8080) and it's running on your local machine.

 

 

RethinkDB uses a range sharding algorithm parameterized on the table’s primary key to partition the data. When the user states they want a given table to use a certain number of shards, the system examines the statistics for the table and finds the optimal set of split points to break up the table evenly. All sharding is currently done based on the table’s primary key, and cannot be done based on any other attribute (in RethinkDB the primary key and the shard key are effectively the same thing).

What is Docker, how to set up Docker, and how to work with Docker

Hello Readers,

Let me introduce you to 'Docker'.

Docker is an open-source project that automates the deployment of Linux applications inside software containers.

 

Please go through the following commands.

Docker
Setup Docker on Ubuntu 14.04
      sudo apt-get -y install docker.io        # install the Docker engine
      sudo docker ps -a                        # list all containers (running and stopped)
      sudo docker images                       # list locally available images
      sudo docker pull ubuntu:14.04            # download the ubuntu:14.04 image
      sudo docker run -i -t ubuntu /bin/bash   # start an interactive container from the ubuntu image
      sudo docker ps -a
      sudo docker attach 6f                    # attach to the container whose ID starts with 6f
List containers and attach to them
sudo docker ps
sudo docker ps -a
sudo docker start <first few characters of a container ID listed by docker ps -a>   # e.g. if a container ID starts with 6ss, you can run sudo docker start 6 (or 6s, or 6ss)
sudo docker attach 6
ls /var/lib/docker/
ls /var/lib/docker/aufs/mnt/6f4d668753e4170c76ac403a7def9cc32cb870b8f12111dc05db4896ea445aa8/
ls /var/lib/docker/aufs/mnt/6f4d668753e4170c76ac403a7def9cc32cb870b8f12111dc05db4896ea445aa8-init/
Mount a volume to put the base OS's files into the container
sudo docker run -v /home/anandprakash:/mnt -t centos:6 ls /mnt      # mount a location from the base OS into the container and run a command like ls on it
sudo docker run -v /home/anandprakash:/mnt -t centos:6 sh /mnt/b.sh # run the script b.sh inside the container by mounting the directory that contains it
docker image build
cat Dockerfile
FROM ubuntu:latest
MAINTAINER Anandprakash Tandale <anand.prqakash@izeltech.com>
RUN apt-get update
RUN apt-get install -y python python-pip
RUN pip install Flask
RUN touch /var/test.txt
sudo docker build -t "python_pip:dockerfile" .
sudo docker images
REPOSITORY          TAG                 IMAGE ID            CREATED              VIRTUAL SIZE
python_pip          dockerfile          54cdc518afcb        About a minute ago   433.5 MB
<none>              <none>              318c589f7931        10 minutes ago       187.9 MB
centos              6                   a0f8f0620fa4        3 days ago           194.6 MB
centos              latest              2a332da70fd1        3 days ago           196.7 MB
ubuntu              latest              594b6e305389        9 days ago           122 MB
ubuntu              14.04               0e7d4a488bcc        9 days ago           187.9 MB
hello-world         latest              70caadc460d7        5 weeks ago          967 B
Run a built image
docker run -i -t 700c0c0fb96d bash      # where 700c0c0fb96d is the image ID
docker cp
Copy a file from a container to the base OS
docker cp 2f47b5c2a366:/var/log/nginx/access.log .
Create a snapshot of a container and use it as an image
sudo docker run -i -t ubuntu:14.04 /bin/sh
Docker import/export
docker export sharp_shockley | gzip > sharp_shockley.tar.gz      # export a container's filesystem as a compressed archive
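
The snapshot section above only shows starting the container; a minimal sketch of the rest of the workflow (the container ID and image tags here are placeholders) could look like this:

sudo docker commit <container-id> my_snapshot:v1                                 # save the container's current filesystem as a new image
sudo docker run -i -t my_snapshot:v1 /bin/bash                                   # start a fresh container from that snapshot
gunzip -c sharp_shockley.tar.gz | sudo docker import - imported_ubuntu:latest    # re-import an exported container archive as an image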

 

 

ELK (Elasticsearch, Logstash, Kibana) stack installation script

Hi guys,

I am providing you a script to install a single-node ELK stack.

Hope you will find it useful.

NOTE: The script runs on Debian/Ubuntu.

Please find the script below.
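
If you just want to run it end to end, one simple way (the file name is only an assumption) is:

# save the script below as elk_install.sh, then run it with bash:
sudo bash elk_install.sh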

 

sudo apt-get update
sudo add-apt-repository -y ppa:webupd8team/java
echo debconf shared/accepted-oracle-license-v1-1 select true | sudo debconf-set-selections
echo debconf shared/accepted-oracle-license-v1-1 seen true | sudo debconf-set-selections
sudo apt-get update
sudo apt-get -y install oracle-java8-installer

############ INSTALL ELASTICSEARCH ##############
wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb http://packages.elastic.co/elasticsearch/2.x/debian stable main" | sudo tee -a /etc/apt/sources.list.d/elasticsearch-2.x.list
sudo apt-get update
sudo apt-get -y install elasticsearch
sudo sed -i 's/# network.host: 192.168.0.1/network.host: localhost/g' /etc/elasticsearch/elasticsearch.yml
sudo service elasticsearch restart
sudo update-rc.d elasticsearch defaults 95 10

############### INSTALL KIBANA ####################
echo "deb http://packages.elastic.co/kibana/4.4/debian stable main" | sudo tee -a /etc/apt/sources.list.d/kibana-4.4.x.list
sudo apt-get update
sudo apt-get -y install kibana
sudo sed -i 's/# server.host: "0.0.0.0"/server.host: "localhost"/g' /opt/kibana/config/kibana.yml
sudo update-rc.d kibana defaults 96 9
sudo service kibana start

################## INSTALL NGINX #######################
sudo apt-get -y install nginx apache2-utils
sudo mv /etc/nginx/sites-available/default /etc/nginx/sites-available/default.bk
cat >>/etc/nginx/sites-available/default <<EOF
server {
listen 80;
server_name example.com;
location / {
proxy_pass http://localhost:5601;
proxy_http_version 1.1;
proxy_set_header Upgrade \$http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host \$host;
proxy_cache_bypass \$http_upgrade;
}
}
EOF

service nginx restart
sudo apt-get update
################################## INSTALL LOGSTASH ####################
echo 'deb http://packages.elastic.co/logstash/2.2/debian stable main' | sudo tee /etc/apt/sources.list.d/logstash-2.2.x.list
sudo apt-get update
sudo apt-get -y install logstash

# ##################create certificate to be used by filebeat for forwarding logs to logstash######################
sudo mkdir -p /etc/pki/tls/certs
sudo mkdir /etc/pki/tls/private
ELK_server_private_IP=$(ifconfig eth0 | grep "inet addr:" | cut -d ':' -f2 | cut -d ' ' -f1)
sed -i “s/v3_ca ]/v3_ca ]\nsubjectAltName = IP: $ELK_server_private_IP/” /etc/ssl/openssl.cnf
cd /etc/pki/tls
sudo openssl req -config /etc/ssl/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt

################## create logstash input######################
cat >>/etc/logstash/conf.d/02-beats-input.conf<<EOF
input {
beats {
port => 5044
ssl => true
ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
}
}
EOF

####################create logstash filter#######################
cat >> /etc/logstash/conf.d/10-syslog-filter.conf<<EOF
filter {
if [type] == "syslog" {
grok {
match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
add_field => [ "received_at", "%{@timestamp}" ]
add_field => [ "received_from", "%{host}" ]
}
syslog_pri { }
date {
match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
}
}
}
EOF

################ create logstash output###############################
cat>> /etc/logstash/conf.d/30-elasticsearch-output.conf<<EOF
output {
elasticsearch {
hosts => ["localhost:9200"]
sniffing => true
manage_template => false
index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
document_type => "%{[@metadata][type]}"
}
}
EOF

sudo service logstash configtest
sudo service logstash restart
sudo update-rc.d logstash defaults 96 9

###################### install filebeat dashboard########################
cd /tmp
curl -L -O https://download.elastic.co/beats/dashboards/beats-dashboards-1.1.0.zip
sudo apt-get -y install unzip
unzip beats-dashboards-*.zip
cd beats-dashboards-*
./load.sh

############################# install filebeat template###################
cd /tmp/
curl -O https://gist.githubusercontent.com/thisismitch/3429023e8438cc25b86c/raw/d8c479e2a1adcea8b1fe86570e42abab0f10f364/filebeat-index-template.json
curl -XPUT 'http://localhost:9200/_template/filebeat?pretty' -d@filebeat-index-template.json

########TODO remove filebeat installation and configuration as this will be done on the server from which we want to forward the logs to this server######
######## install filebeat on the same server for testing##############
echo "deb https://packages.elastic.co/beats/apt stable main" | sudo tee -a /etc/apt/sources.list.d/beats.list
wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
sudo apt-get update
sudo apt-get -y install filebeat
sed -i 's/#document_type: log/document_type: syslog/g' /etc/filebeat/filebeat.yml
sed -i 's/#logstash/logstash/g' /etc/filebeat/filebeat.yml
sed -i "s/#hosts: \[\"localhost:5044\"\]/hosts: [\"$ELK_server_private_IP:5044\"]\n bulk_max_size: 1024/g" /etc/filebeat/filebeat.yml
sed -i 's/#tls:/tls:/g' /etc/filebeat/filebeat.yml
sed -i 's!#certificate_authorities: \["/etc/pki/root/ca.pem"\]!certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]!g' /etc/filebeat/filebeat.yml
sudo service filebeat restart
sudo update-rc.d filebeat defaults 95 10

How to check whether the ELK stack is installed properly –

In a web browser, go to the FQDN or public IP address of your ELK server. After entering it, you should see a page prompting you to configure a default index pattern.
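
You can also verify from the command line that Filebeat data is reaching Elasticsearch; for example (the index pattern follows the filebeat-* naming used by the script above):

curl -XGET 'http://localhost:9200/_cat/indices?v'                # the list should contain a filebeat-YYYY.MM.DD index
curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'     # should return JSON hits containing recent syslog entries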

 

 

 
