I am glad that you are here! I started working in bioinformatics a few years ago (and recently switched to cloud computing), and was amazed by those single-word bash commands that run much faster than my dull scripts, and by the time saved through learning command-line shortcuts and scripting. Not all the code here is a one-liner, but I put effort into keeping it brief and swift. I mainly use Ubuntu, Red Hat, Linux Mint and CentOS; sorry if the commands don't work on your system.
This blog focuses on simple bash commands for parsing data and Linux system maintenance that I picked up from work and the LPIC exam. I apologize that there are no detailed citations for all the commands; most of them come from dear Google and Stack Overflow.
Neither English nor bash is my first language; please correct me anytime, thank you.
If you know other cool commands, please teach me!
Here's a more stylish version of Bash-Oneliner~
- Terminal Tricks
- Variable
- Grep
- Sed
- Awk
- Xargs
- Find
- Condition and Loop
- Math
- Time
- Download
- Random
- Xwindow
- System
- Hardware
- Networking
- Others
Ctrl + n : same as Down arrow.
Ctrl + p : same as Up arrow.
Ctrl + r : begins a backward search through command history.(keep pressing Ctrl + r to move backward)
Ctrl + s : to stop output to terminal.
Ctrl + q : to resume output to terminal after Ctrl + s.
Ctrl + a : move to the beginning of line.
Ctrl + e : move to the end of line.
Ctrl + d : if you've typed something, Ctrl + d deletes the character under the cursor; otherwise, it exits the current shell.
Ctrl + k : delete all text from the cursor to the end of line.
Ctrl + x + backspace : delete all text from the beginning of line to the cursor.
Ctrl + t : transpose the character before the cursor with the one under the cursor; press Esc + t to transpose the two words before the cursor.
Ctrl + w : cut the word before the cursor; then Ctrl + y paste it
Ctrl + u : cut the line before the cursor; then Ctrl + y paste it
Ctrl + x + Ctrl + e : launch the editor defined by $EDITOR.
Ctrl + _ : undo typing.
Esc + u
# converts text from cursor to the end of the word to uppercase.
Esc + l
# converts text from cursor to the end of the word to lowercase.
Esc + c
# converts letter under the cursor to uppercase.
!53
# run command number 53 from your history (list the numbers with the 'history' command)
!!
# run the last command again
Run the last command, changing one parameter using caret substitution (e.g. last command: echo 'aaa'; rerun as: echo 'bbb')
#last command: echo 'aaa'
^aaa^bbb
#echo 'bbb'
#bbb
#Notice that only the first 'aaa' is replaced. To replace all occurrences of 'aaa', use ':&' to repeat the substitution:
^aaa^bbb^:&
#or
!!:gs/aaa/bbb/
!cat
# or
!c
# run cat filename again
$0 :name of shell or shell script.
$1, $2, $3, ... :positional parameters.
$# :number of positional parameters.
$? :most recent foreground pipeline exit status.
$- :current options set for the shell.
$$ :pid of the current shell (not subshell).
$! :is the PID of the most recent background command.
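The special parameters above are easiest to see side by side in a tiny script; a minimal sketch (the script name args_demo.sh is just an illustration):

```shell
#!/bin/bash
# args_demo.sh - print the common special parameters
echo "script name: $0"
echo "first arg:   $1"
echo "arg count:   $#"
false                   # a command that fails with status 1
echo "last status: $?"  # exit status of 'false'
echo "shell pid:   $$"
sleep 0.1 &             # start a background job
echo "bg pid:      $!"  # PID of that background job
wait                    # reap the background job
```

Run it as `bash args_demo.sh foo bar` and it reports the first argument as foo and the arg count as 2.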
grep = grep -G # Basic Regular Expression (BRE)
fgrep = grep -F # fixed text, ignoring meta-characters
egrep = grep -E # Extended Regular Expression (ERE)
grep -P # Perl Compatible Regular Expressions (PCRE); note: 'pgrep' is a separate command that looks up processes, not a grep alias
rgrep = grep -r # recursive
grep -c "^$"
grep -o '[0-9]*'
#or
grep -oP '\d'
grep '[0-9]\{3\}'
# or
grep -E '[0-9]{3}'
# or
grep -P '\d{3}'
grep -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}'
# or
grep -Po '\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}'
grep -w 'target'
#or using RE
grep '\btarget\b'
# return also 3 lines after match
grep -A 3 'bbo'
# return also 3 lines before match
grep -B 3 'bbo'
# return also 3 lines before and after match
grep -C 3 'bbo'
grep -o 'S.*'
grep -o -P '(?<=w1).*(?=w2)'
grep -v bbo filename
grep -v '^#' file.txt
grep "$boo" filename
#remember to quote the variable!
grep -m 1 bbo filename
grep -c bbo filename
grep -o bbo filename |wc -l
grep -i "bbo" filename
grep --color bbo filename
grep -R bbo /path/to/directory
# or
grep -r bbo /path/to/directory
grep -rh bbo /path/to/directory
grep -rl bbo /path/to/directory
grep 'A\|B\|C\|D'
grep 'A.*B'
grep 'A.B'
grep -E 'colou?r'
# in basic grep, '?' is literal; use -E or escape it as \?
grep -f fileA fileB
grep $'\t'
echo "$long_str"|grep -q "$short_str"
if [ $? -eq 0 ]; then echo 'found'; fi
#grep -q exits with status 0 when a match is found
#remember to add spaces inside the [ ] brackets!
grep -oP '\(\K[^\)]+'
grep -o -w "\w\{10\}\-R\w\{1\}"
# \w word character [0-9a-zA-Z_] \W not word character
grep -d skip 'bbo' /path/to/files/*
sed 1d filename
sed 1,100d filename
sed "/bbo/d" filename
- case insensitive:
sed "/bbo/Id" filename
sed -E '/^.{4}[^2]/d'
# delete lines whose 5th character is not '2', e.g.:
#aaaa2aaa (kept)
#aaaa1aaa (deleted!)
sed -i "/bbo/d" filename
# e.g. add >$i to the first line (to make a bioinformatics FASTA file)
sed "1i >$i"
# notice the double quotes! In other examples a single quote is fine, but here $i must expand, so double quotes are required
# '1i' means insert to first line
# Use backslash for end-of-line $ pattern, and double quotes for expressing the variable
sed -e "\$s/\$/\n+--$3-----+/"
sed '/^\s*$/d'
# or
sed '/^$/d'
sed '$d'
sed -i '$ s/.$//' filename
sed -i '1s/^/[/' file
sed -e '1isomething' -e '3isomething'
sed '$s/$/]/' filename
sed '$a\'
sed -e 's/^/bbo/' file
sed -e 's/$/\}\]/' filename
sed 's/.\{4\}/&\n/g'
sed -s '$a,' *.json > all.json
sed 's/A/B/g' filename
sed "s/aaa=.*/aaa=\/my\/new\/path/g"
sed -n '/^@S/p'
sed '/bbo/d' filename
sed -n 500,5000p filename
sed -n '0~3p' filename
# GNU extension 'first~step': start at line 0 and print every 3rd line
sed -n '1~2p'
sed -n '1p;0~3p'
sed -e 's/^[ \t]*//'
# Notice a whitespace before '\t'!!
sed 's/ *//'
# notice a whitespace before '*'!!
sed 's/,$//g'
sed "s/$/\t$i/"
# $i is the variable you want to add
# To add the filename to every last column of the file
for i in $(ls);do sed -i "s/$/\t$i/" $i;done
for i in T000086_1.02.n T000086_1.02.p;do sed "s/$/\t${i/*./}/" $i;done >T000086_1.02.np
sed ':a;N;$!ba;s/\n//g'
# :a creates a label; N appends the next line; $!ba branches back to 'a' unless on the last line; then all newlines are deleted
sed -n -e '123p'
sed -n '10,33p' <filename
sed 's=/=\\/=g'
sed 's/A-.*-e//g' filename
sed '$ s/.$//'
sed -r -e 's/^.{3}/&#/' file
awk -F $'\t'
awk -v OFS='\t'
a=bbo;b=obb;
awk -v a="$a" -v b="$b" '$1==a && $10==b' filename
awk '{print NR,length($0);}' filename
awk '{print NF}'
awk '{print $2, $1}'
awk '$1~/,/ {print}'
awk '{split($2, a,",");for (i in a) print $1"\t"a[i]}' filename
awk -v N=7 '{print}/bbo/&& --N<=0 {exit}'
ls|xargs -n1 -I file awk '{s=$0};END{print FILENAME,s}' file
awk 'BEGIN{OFS="\t"}$3="chr"$3'
awk '!/bbo/' file
awk 'NF{NF-=1};1' file
# For example there are two files:
# fileA:
# a
# b
# c
# fileB:
# d
# e
awk '{print FILENAME, NR, FNR, $0}' fileA fileB
# fileA 1 1 a
# fileA 2 2 b
# fileA 3 3 c
# fileB 4 1 d
# fileB 5 2 e
# For example there are two files:
# fileA:
# 1 0
# 2 1
# 3 1
# 4 0
# fileB:
# 1 0
# 2 1
# 3 0
# 4 1
awk -v OFS='\t' 'NR==FNR{a[$1]=$2;next} NF {print $1,((a[$1]==$2)? $2:"0")}' fileA fileB
# 1 0
# 2 1
# 3 0
# 4 0
awk '{while (match($0, /[0-9]+\.[0-9]+/)){
  printf "%s%.2f", substr($0,1,RSTART-1), substr($0,RSTART,RLENGTH)
  $0=substr($0, RSTART+RLENGTH)
}
print
}'
awk '{printf("%s\t%s\n",NR,$0)}'
# For example, separate the following content:
# David cat,dog
# into
# David cat
# David dog
awk '{split($2,a,",");for(i in a)print $1"\t"a[i]}' file
# Detail here: http://stackoverflow.com/questions/33408762/bash-turning-single-comma-separated-column-into-multi-line-string
awk '{s+=$1}END{print s/NR}'
awk '$1 ~ /^Linux/'
awk ' {split( $0, a, "\t" ); asort( a ); for( i = 1; i <= length(a); i++ ) printf( "%s\t", a[i] ); printf( "\n" ); }'
awk '{$6 = $4 - prev5; prev5 = $5; print;}'
xargs -d '\t'
# set the delimiter (GNU xargs understands C-style escapes like '\t')
echo 1 2 3 4 5 6| xargs -n 3
# 1 2 3
# 4 5 6
echo a b c |xargs -p -n 3
echo abcd| xargs -t
# echo abcd
# abcd
find . -name "*.html"|xargs rm
# when using a backtick
rm `find . -name "*.html"`
find . -name "*.c" -print0|xargs -0 rm -rf
xargs --show-limits
find . -name "*.bak" -print0|xargs -0 -I {} mv {} ~/old
# or
find . -name "*.bak" -print0|xargs -0 -I file mv file ~/old
ls |head -100|xargs -I {} mv {} d1
time echo {1..5} |xargs -n 1 -P 5 sleep
# a lot faster than:
time echo {1..5} |xargs -n1 sleep
find /dir/to/A -type f -name "*.py" -print0| xargs -0 -r -I file cp -v -p file --target-directory=/path/to/B
# -v: verbose
# -p: preserve attributes (e.g. owner)
ls |xargs -n1 -I file sed -i '/^Pos/d' file
ls |sed 's/.txt//g'|xargs -n1 -I file sed -i '1i >file' file.txt
ls |xargs -n1 wc -l
ls -l| xargs
echo mso{1..8}|xargs -n1 bash -c 'echo -n "$1:"; ls -la "$1"| grep -w 74 |wc -l' --
# "--" signals the end of options and disables further option processing
cat requirements.txt| xargs -n1 sudo pip install
ls|xargs wc -l
cat grep_list |xargs -I{} grep {} filename
grep -rl '192.168.1.111' /etc | xargs sed -i 's/192.168.1.111/192.168.2.111/g'
find .
find . -type f
find . -type d
find . -name '*.php' -exec sed -i 's/www/w/g' {} \;
# if there are no subdirectory
replace "www" "w" -- *
# '--' ends option parsing; '*' expands to all files in the current directory
find mso*/ -name M* -printf "%f\n"
find . -name "*.mso" -size -74c -delete
# -size -74c: smaller than 74 bytes (c: bytes, k: KB, M: MB, G: GB)
# if and else loop for string matching
if [[ "$c" == "read" ]]; then outputdir="seq"; else outputdir="write" ; fi
# Test if myfile contains the string 'test':
if grep -q hello myfile; then …
# Test if mydir is a directory, change to it and do other stuff:
if cd mydir; then
echo 'some content' >myfile
else
echo >&2 "Fatal error. This script requires mydir."
fi
# if variable is null
if [ -z "$myvariable" ]
# true if the length of "$myvariable" is zero ('-s' tests file size, not variables)
# Test if file exist
if [ -e 'filename' ]
then
echo -e "file exists!"
fi
# Test if file exist but also including symbolic links:
if [ -e myfile ] || [ -L myfile ]
then
echo -e "file exists!"
fi
# Test if the value of x is greater or equal than 5
if [ "$x" -ge 5 ]; then …
# Test if the value of x is greater or equal than 5, in bash/ksh/zsh:
if ((x >= 5)); then …
# Use (( )) for arithmetic operation
if ((j==u+2))
# Use [[ ]] for comparison
if [[ $age -gt 21 ]]
for i in $(ls); do echo file $i;done
#or
for i in *; do echo file $i; done
# Press any key to continue each loop
for i in $(cat tpc_stats_0925.log |grep failed|grep -o 'query\w\{1,2\}');do cat ${i}.log; read -rsp $'Press any key to continue...\n' -n1 key;done
# Print a file line by line when a key is pressed,
oifs="$IFS"; IFS=$'\n'; for line in $(cat myfile); do ...; done
while read -r line; do ...; done <myfile
#If only one word a line, simply
for line in $(cat myfile); do echo $line; read -n1; done
#Loop through an array
for i in "${arrayName[@]}"; do echo $i;done
# Column subtraction of a file (e.g. a 3 columns file)
while read a b c; do echo $(($c-$b));done < <(head filename)
#there is a space between the two '<'s
# Sum up column subtraction
i=0; while read a b c; do ((i+=$c-$b)); echo $i; done < <(head filename)
# Keep checking a running process (e.g. perl) and start another new process (e.g. python) immediately after it. (BETTER use the wait command! Ctrl+F 'wait')
while [[ $(pidof perl) ]];do echo f;sleep 10;done && python timetorunpython.py
read type;
case $type in
'0')
echo 'how'
;;
'1')
echo 'are'
;;
'2')
echo 'you'
;;
esac
# foo=bar
echo "'$foo'"
#'bar'
# the double quotes let $foo expand; the single quotes inside them are just literal characters
var="some string"
echo ${#var}
# 11
var=string
echo "${var:0:1}"
#s
# or
echo ${var%%"${var#?}"}
var="some string"
echo ${var:2}
#me string
var="0050"
echo ${var[@]#0}
#050
${var/a/,}
# replace the first 'a' with ','
${var//a/,}
# replace all 'a' with ','
#with grep
test="god the father"
grep ${test// /\\\|} file.txt
# turning the space into 'or' (\|) in grep
var=HelloWorld
echo ${var,,}
# helloworld
echo $(( 10 + 5 )) #15
x=1
echo $(( x++ )) #1 , notice that it is still 1, since it's a post-increment
echo $(( x++ )) #2
echo $(( ++x )) #4 , notice that it is not 3 since it's a pre-increment
echo $(( x-- )) #4
echo $(( x-- )) #3
echo $(( --x )) #1
x=2
y=3
echo $(( x ** y )) #8
factor 50
seq 10|paste -sd+|bc
awk '{s+=$1} END {print s}' filename
cat file| awk -F '\t' 'BEGIN {SUM=0}{SUM+=$3-$2}END{print SUM}'
expr 10 + 20 #30 (expr needs spaces around the operators, otherwise it just echoes the string)
expr 10 \* 20 #200
expr 30 \> 20 #1 (true)
# Number of decimal digit/ significant figure
echo "scale=2;2/3" | bc
#.66
# Exponent operator
echo "10^2" | bc
#100
# Using variables
echo "var=5;--var"| bc
#4
time echo hi
sleep 10
TMOUT=10
#once you set this variable, the logout timer starts running!
#This will run the command 'sleep 10' for only 1 second.
timeout 1 sleep 10
at now + 1min #time-units can be minutes, hours, days, or weeks
# warning: commands will be executed using /bin/sh
at> echo hihigithub >~/itworks
at> <EOT> # press Ctrl + D to exit
# job 1 at Wed Apr 18 11:16:00 2018
curl https://raw.githubusercontent.com/onceupon/Bash-Oneliner/master/README.md | pandoc -f markdown -t man | man -l -
# or w3m (a text based web browser and pager)
curl https://raw.githubusercontent.com/onceupon/Bash-Oneliner/master/README.md | pandoc | w3m -T text/html
# or using emacs (in the emacs text editor)
emacs --eval '(org-mode)' --insert <(curl https://raw.githubusercontent.com/onceupon/Bash-Oneliner/master/README.md | pandoc -t org)
# or using emacs (on terminal, exit using Ctrl + x then Ctrl + c)
emacs -nw --eval '(org-mode)' --insert <(curl https://raw.githubusercontent.com/onceupon/Bash-Oneliner/master/README.md | pandoc -t org)
wget -r -l1 -H -t1 -nd -N -np -A mp3 -e robots=off http://example.com
# -r: recursive and download all links on page
# -l1: only one level link
# -H: span host, visit other hosts
# -t1: numbers of retries
# -nd: don't make new directories, download to here
# -N: turn on timestamp
# -np: no parent
# -A: type (separate by ,)
# -e robots=off: ignore the robots.txt file, which would otherwise stop wget from crawling the site; sorry example.com
Upload a file to the web and download it later (https://transfer.sh/)
# Upload a file (e.g. filename.txt):
curl --upload-file ./filename.txt https://transfer.sh/filename.txt
# the above command will return a URL, e.g: https://transfer.sh/tG8rM/filename.txt
# Next you can download it by:
curl https://transfer.sh/tG8rM/filename.txt -o filename.txt
data=file.txt
url=http://www.example.com/$data
if [ ! -s "$data" ];then
echo "downloading test data..."
wget $url
fi
wget -O filename "http://example.com"
wget -P /path/to/directory "http://example.com"
shuf -n 100 filename
for i in a b c d e; do echo $i; done| shuf
Echo a series of random numbers within a range (e.g. shuffle the numbers 0-100, then pick 15 of them randomly)
shuf -i 0-100 -n 15
echo $RANDOM
echo $((RANDOM % 10))
echo $(((RANDOM %10)+1))
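The modulo trick generalizes to any range; a minimal sketch for a random integer between MIN and MAX inclusive (both variable names are just illustrations):

```shell
MIN=5
MAX=10
# RANDOM is 0..32767; take it modulo the width of the range, then shift by MIN
echo $(( MIN + RANDOM % (MAX - MIN + 1) ))
```

Note that there is a slight bias towards the low end whenever the range width does not divide 32768 evenly.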
X11 GUI applications! Here are some GUI tools for you if you get bored by the text-only environment.
ssh -X user_name@ip_address
# or setting through xhost
# --> Install the following for Centos:
# xorg-x11-xauth
# xorg-x11-fonts-*
# xorg-x11-utils
xclock
xeyes
xcowsay
1. ssh -X user_name@ip_address
2. apt-get install eog
3. eog picture.png
1. ssh -X user_name@ip_address
2. sudo apt install mpv
3. mpv myvideo.mp4
1. ssh -X user_name@ip_address
2. apt-get install gedit
3. gedit filename.txt
1. ssh -X user_name@ip_address
2. apt-get install evince
3. evince filename.pdf
1. ssh -X user_name@ip_address
2. apt-get install libxss1 libappindicator1 libindicator7
3. wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
4. sudo apt-get install -f
5. dpkg -i google-chrome*.deb
6. google-chrome
if [ "$EUID" -ne 0 ]; then
echo "Please run this as root"
exit 1
fi
ps
ip addr show
# or
ifconfig
cat /etc/*-release
man hier
jobs -l
export PATH=$PATH:~/path/you/want
chmod +x filename
# you can now ./filename to execute it
uname -i
links www.google.com
useradd username
passwd username
1. joe ~/.bash_profile
2. export PS1='\u@\h:\w\$'
# $PS1 is a variable that defines the makeup and style of the command prompt
3. source ~/.bash_profile
1. joe ~/.bash_profile
2. alias pd="pwd" # no more need to type that 'w'!
3. source ~/.bash_profile
echo $PATH
# list of directories separated by a colon
env
unset MYVAR
lsblk
ln -s /path/to/program /home/usr/bin
# must be the whole path to the program
hexdump -C filename.class
rsh node_name
netstat -tulpn
readlink filename
type python
# python is /usr/bin/python
# There are 5 different types; check with 'type -t', which prints one of:
# 1. alias (shell alias)
# 2. function (shell function, type will also print the function body)
# 3. builtin (shell builtin)
# 4. file (disk file)
# 5. keyword (shell reserved word)
# You can also use `which`
which python
# /usr/bin/python
declare -F
du -hs .
# or
du -sb
cp -rp /path/to/directory
pushd .
# then pop
popd
#or use dirs to display the list of currently remembered directories.
dirs -l
df -h
# or
du -h
#or
du -sk /var/log/* |sort -rn |head -10
runlevel
init 3
#or
telinit 3
1. edit /etc/init/rc-sysinit.conf
2. env DEFAULT_RUNLEVEL=2
su
su somebody
repquota -auvs
getent database_name
# (e.g. the 'passwd' database)
getent passwd
# list all user account (all local and LDAP)
# (e.g. fetch the list of group accounts)
getent group
# store in database 'group'
chown user_name filename
chown -R user_name /path/to/directory/
# chown user:group filename
df
cat /etc/passwd
getent passwd| awk -F: '{print $1}'
compgen -u
compgen -g
groups username
id username
if [ $(id -u) -ne 0 ];then
echo "You are not root!"
exit;
fi
# 'id -u' outputs 0 when you are root
more /proc/cpuinfo
# or
lscpu
setquota username 120586240 125829120 0 0 /home
quota -v username
# fork bomb, do NOT run: :(){ :|:& };: (it spawns processes until the system hangs)
lastlog
joe /etc/environment
# edit this file
ps aux
cat /proc/sys/kernel/pid_max
ulimit -u
nmap -sT -O localhost
#notice that some companies might not like you using nmap
nproc --all
1. top
2. press '1'
jobs -l
service --status-all
shutdown -r +5 "Server will restart in 5 minutes. Please save your work."
shutdown -c
wall -n hihi
pkill -U user_name
kill -9 $(ps aux | grep 'program_name' | awk '{print $2}')
# You might have to install the following:
apt-get install libglib2.0-bin;
# or
yum install dconf dconf-editor;
yum install dbus dbus-x11;
# Check list
gsettings list-recursively
# Change some settings
gsettings set org.gnome.gedit.preferences.editor highlight-current-line true
gsettings set org.gnome.gedit.preferences.editor scheme 'cobalt'
gsettings set org.gnome.gedit.preferences.editor use-default-font false
gsettings set org.gnome.gedit.preferences.editor editor-font 'Cantarell Regular 12'
Add a user to a group (e.g. add user 'nice' to the group 'docker', so that they can run docker without sudo)
sudo gpasswd -a nice docker
1. pip install --user package_name
2. You might need to export ~/.local/bin/ to PATH: export PATH=$PATH:~/.local/bin/
1. uname -a #check current kernel, which should NOT be removed
2. sudo apt-get purge linux-image-X.X.X-X-generic #replace old version
sudo hostname your-new-name
# if not working, do also:
hostnamectl set-hostname your-new-hostname
# then check with:
hostnamectl
# Or check /etc/hostname
# If still not working..., edit:
/etc/sysconfig/network
/etc/sysconfig/network-scripts/ifcfg-ensxxx
#add HOSTNAME="your-new-hostname"
apt list --installed
# or on Red Hat:
yum list installed
lsof /mnt/dir
killall pulseaudio
# then press Alt-F2 and type in pulseaudio
killall pulseaudio
lsscsi
http://onceuponmine.blogspot.tw/2017/08/set-up-your-own-dns-server.html
http://onceuponmine.blogspot.tw/2017/07/create-your-first-simple-daemon.html
http://onceuponmine.blogspot.tw/2017/10/setting-up-msmtprc-and-use-your-gmail.html
Using telnet to test open ports: check if you can connect to a port (e.g. 53) of a server (e.g. 192.168.2.106)
telnet 192.168.2.106 53
ifconfig eth0 mtu 9000
pidof python
# or
ps aux|grep python
# Start ntp:
ntpd
# Check ntp:
ntpq -p
sudo apt-get autoremove
sudo apt-get clean
sudo rm -rf ~/.cache/thumbnails/*
# Remove old kernel:
sudo dpkg --list 'linux-image*'
sudo apt-get remove linux-image-OLDER_VERSION
pvscan
lvextend -L +130G /dev/rhel/root -r
# Adding -r will grow filesystem after resizing the volume.
sudo dd if=~/path/to/isofile.iso of=/dev/sdc oflag=direct bs=1048576
# write to the whole device (e.g. /dev/sdc), not a partition (e.g. /dev/sdc1)
sudo dpkg -l | grep <package_name>
sudo dpkg --purge <package_name>
ssh -f -L 9000:targetservername:8088 username@192.168.14.72 -N
#-f: run in background; -L: forward the local port; -N: do not execute a remote command
#the 9000 of your computer is now connected to the 8088 port of the targetservername through 192.168.14.72
#so that you can see the content of targetservername:8088 by entering localhost:9000 from your browser.
#pidof
pidof sublime_text
#pgrep, you don't have to type the whole program name
pgrep sublim
#top, takes longer time
top|grep sublime_text
aio-stress - AIO benchmark.
bandwidth - memory bandwidth benchmark.
bonnie++ - hard drive and file system performance benchmark.
dbench - generate I/O workloads to either a filesystem or to a networked CIFS or NFS server.
dnsperf - benchmark authoritative and recursive DNS servers.
filebench - model based file system workload generator.
fio - I/O benchmark.
fs_mark - synchronous/async file creation benchmark.
httperf - measure web server performance.
interbench - linux interactivity benchmark.
ioblazer - multi-platform storage stack micro-benchmark.
iozone - filesystem benchmark.
iperf3 - measure TCP/UDP/SCTP performance.
kcbench - kernel compile benchmark, compiles a kernel and measures the time it takes.
lmbench - Suite of simple, portable benchmarks.
netperf - measure network performance, test unidirectional throughput, and end-to-end latency.
netpipe - network protocol independent performance evaluator.
nfsometer - NFS performance framework.
nuttcp - measure network performance.
phoronix-test-suite - comprehensive automated testing and benchmarking platform.
seeker - portable disk seek benchmark.
siege - http load tester and benchmark.
sockperf - network benchmarking utility over socket API.
spew - measures I/O performance and/or generates I/O load.
stress - workload generator for POSIX systems.
sysbench - scriptable database and system performance benchmark.
tiobench - threaded IO benchmark.
unixbench - the original BYTE UNIX benchmark suite, provide a basic indicator of the performance of a Unix-like system.
wrk - HTTP benchmark.
lastb
who
w
users
tail -f --pid=<PID> filename.txt
# replace <PID> with the process ID of the program.
systemctl list-unit-files|grep enabled
lshw -json >report.json
# Other options are: [ -html ] [ -short ] [ -xml ] [ -json ] [ -businfo ] [ -sanitize ] ,etc
sudo dmidecode -t memory
dmidecode -t 4
# Type Information
# 0 BIOS
# 1 System
# 2 Base Board
# 3 Chassis
# 4 Processor
# 5 Memory Controller
# 6 Memory Module
# 7 Cache
# 8 Port Connector
# 9 System Slots
# 11 OEM Strings
# 13 BIOS Language
# 15 System Event Log
# 16 Physical Memory Array
# 17 Memory Device
# 18 32-bit Memory Error
# 19 Memory Array Mapped Address
# 20 Memory Device Mapped Address
# 21 Built-in Pointing Device
# 22 Portable Battery
# 23 System Reset
# 24 Hardware Security
# 25 System Power Controls
# 26 Voltage Probe
# 27 Cooling Device
# 28 Temperature Probe
# 29 Electrical Current Probe
# 30 Out-of-band Remote Access
# 31 Boot Integrity Services
# 32 System Boot
# 34 Management Device
# 35 Management Device Component
# 36 Management Device Threshold Data
# 37 Memory Channel
# 38 IPMI Device
# 39 Power Supply
lsscsi|grep SEAGATE|wc -l
# or
sg_map -i -x|grep SEAGATE|wc -l
blkid /dev/sdb
lsblk -io KNAME,TYPE,MODEL,VENDOR,SIZE,ROTA
#where ROTA means rotational device / spinning hard disks (1 if true, 0 if false)
lspci | egrep -i --color 'network|ethernet'
# Remotely finding out power status of the server
ipmitool -U <bmc_username> -P <bmc_password> -I lanplus -H <bmc_ip_address> power status
# Remotely switching on server
ipmitool -U <bmc_username> -P <bmc_password> -I lanplus -H <bmc_ip_address> power on
# Turn on panel identify light (default 15s)
ipmitool chassis identify 255
# Find out server sensor temperatures
ipmitool sensor |grep -i Temp
# Reset BMC
ipmitool bmc reset cold
# Print BMC network settings
ipmitool lan print 1
# Setting BMC network
ipmitool -I bmc lan set 1 ipaddr 192.168.0.55
ipmitool -I bmc lan set 1 netmask 255.255.255.0
ipmitool -I bmc lan set 1 defgw ipaddr 192.168.0.1
ip a
ip r
Display the ARP cache (the MAC addresses of devices on the same network that you have communicated with)
ip n
ip address add 192.168.140.3/24 dev eno16777736
sudo vi /etc/sysconfig/network-scripts/ifcfg-enoxxx
# then edit the fields: BOOTPROTO, DEVICE, IPADDR, NETMASK, GATEWAY, DNS1, etc.
sudo nmcli c reload
sudo systemctl restart network.service
hostnamectl
hostnamectl set-hostname "mynode"
dirname `pwd`
tee <fileA fileB fileC fileD >/dev/null
tr --delete '\n' <input.txt >output.txt
tr '\n' ' ' <filename
tr 'a-z' 'A-Z'
echo 'something' |tr a-z a
# aaaaaaaaa
diff fileA fileB
# a: added; d: deleted; c: changed
# or
sdiff fileA fileB
# side-to-side merge of file differences
diff fileA fileB --strip-trailing-cr
nl fileA
#or
nl -nrz fileA
# add leading zeros
#or
nl -w1 -s ' '
# making it simple, blank separate
Join two files field by field with a tab (by default, join uses the first column of both files and a space separator)
# fileA and fileB must be sorted on the join field.
join -t $'\t' fileA fileB
# Join using specified field (e.g. column 3 of fileA and column 5 of fileB)
join -1 3 -2 5 fileA fileB
paste fileA fileB fileC
# default tab separate
echo 12345| rev
zmore filename
# or
zless filename
some_commands &>log &
# or
some_commands 2>log &
# or
some_commands 2>&1| tee logfile
# or
some_commands |& tee logfile
# or
some_commands >>outfile 2>&1
# note the order: '2>&1 >>outfile' would leave stderr on the terminal
#0: standard input; 1: standard output; 2: standard error
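To see the descriptors in action, here is a minimal sketch with a function (msg is just an illustrative name) that writes one line to stdout and one to stderr:

```shell
msg() { echo "to stdout"; echo "to stderr" >&2; }

msg >out.log 2>err.log   # split the two streams into separate files
msg >both.log 2>&1       # point stderr at stdout, capture both in one file
msg 2>/dev/null          # silence stderr only; stdout still reaches the terminal
```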
echo 'heres the content'| mail -a /path/to/attach_file.txt -s 'mail.subject' [email protected]
# note: the meaning of -a varies between mail implementations: some attach a file, others append a header (e.g. -a "From: ..."); check your local man page
xls2csv filename
echo 'hihi' >>filename
speaker-test -t sine -f 1000 -l1
(speaker-test -t sine -f 1000) & pid=$!;sleep 0.1s;kill -9 $pid
~/.bash_history
#or
history -d [line_number]
head !$
clear
# or
Ctrl+l
cat /directory/to/file
echo 100>!$
unxz filename.tar.xz
# then
tar -xf filename.tar
pip install packagename
Ctrl+U
# or
Ctrl+C
# or
Alt+Shift+#
# to save the current line to history without running it
# just add a '#' in front and press Enter~~
sleep 5;echo hi
rsync -av filename filename.bak
rsync -av directory directory.bak
rsync -av --ignore-existing directory/ directory.bak
rsync -av --update directory directory.bak
rsync -av directory user@ip_address:/path/to/directory.bak
# skip files that are newer on the receiver (I prefer this one!)
mkdir -p project/{lib/ext,bin,src,doc/{html,info,pdf},demo/stat}
# -p: make parent directory
# this will create project/doc/html/; project/doc/info; project/lib/ext ,etc
cd tmp/ && tar xvf ~/a.tar
cd tmp/a/b/c ||mkdir -p tmp/a/b/c
tar xvf filename.gz -C /path/to/directory
cd tmp/a/b/c \
> || \
>mkdir -p tmp/a/b/c
VAR=$PWD; cd ~; tar xvf file.tar -C "$VAR"
# PWD must be in capital letters
file /tmp/
# tmp/: directory
#!/bin/bash
file=${1#*.}
# remove string before a "."
python -m SimpleHTTPServer
# Python 2; on Python 3 use: python3 -m http.server
read input
echo $input
seq 10
i=`wc -l filename|cut -d ' ' -f1`; cat filename| echo "scale=2;(`paste -sd+`)/"$i|bc
echo {1,2}{1,2}
# 1 1, 1 2, 2 1, 2 2
set={A,T,C,G}
group=5
for ((i=0; i<group; i++));do
repetition=$set$repetition;done
bash -c "echo $repetition"
foo=$(<test1)
echo ${#foo}
echo -e ' \t '
declare -a array=()
# or
declare array=()
# or associative array
declare -A array=()
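A minimal sketch of filling and reading both kinds of array (all names are just illustrations; associative arrays need bash 4+):

```shell
declare -a fruits=()            # indexed array
fruits+=("apple")               # append an element
fruits+=("banana")
echo "${fruits[0]}"             # first element: apple
echo "${#fruits[@]}"            # number of elements: 2

declare -A ages=()              # associative array
ages[alice]=30
ages[bob]=25
echo "${ages[alice]}"           # 30
for name in "${!ages[@]}"; do echo "$name: ${ages[$name]}"; done   # iterate over keys
```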
scp -r directoryname user@ip:/path/to/send
# Split by line (e.g. 1000 lines/smallfile)
split -d -l 1000 largefile.txt
# Split by byte without breaking lines across files
split -C 10 largefile.txt
#1. Create a big file
dd if=/dev/zero of=bigfile bs=1 count=1000000
#2. Split the big file to 100000 10-bytes files
split -b 10 -a 10 bigfile
rename 's/ABC//' *.gz
basename filename.gz .gz
zcat filename.gz> $(basename filename.gz .gz).unpacked
rename s/$/.txt/ *
# You can use 'rename -n s/$/.txt/ *' to preview the result first; it will only print something like this:
# rename(a, a.txt)
# rename(b, b.txt)
# rename(c, c.txt)
tr -s "\t" < filename
echo -e 'text here \c'
!$
echo $?
head -c 50 file
# e.g.
# AAAA
# BBBB
# CCCC
# DDDD
cat filename|paste - -
# AAAABBBB
# CCCCDDDD
cat filename|paste - - - -
# AAAABBBBCCCCDDDD
cat file.fastq | paste - - - - | sed 's/^@/>/g'| cut -f1-2 | tr '\t' '\n' >file.fa
cat file|rev | cut -d/ -f1 | rev
((var++))
# or
var=$((var+1))
>filename
tar xvjf file.tar.bz2
unxz file.tar.xz
tar xopf file.tar
# 'y':
yes
# or 'n':
yes n
# or 'anything':
yes anything
# For example:
yes | rm -r large_directory
dd if=/dev/zero of=//dev/shm/200m bs=1024k count=200
# or
dd if=/dev/zero of=//dev/shm/200m bs=1M count=200
# Standard output:
# 200+0 records in
# 200+0 records out
# 209715200 bytes (210 MB) copied, 0.0955679 s, 2.2 GB/s
cat >myfile
let me add sth here
# exit with Ctrl + D (end-of-file)
^D
watch -n 1 wc -l filename
set -x; echo `expr 10 + 20 `
fortune
htop
read -rsp $'Press any key to continue...\n' -n1 key
# download:
# https://github.com/harelba/q
# example:
q -d "," "select c3,c4,c5 from /path/to/file.txt where c3='foo' and c5='boo'"
# Create session and attach:
screen
# Create detached session foo:
screen -S foo -d -m
# Detach from the current session:
screen: ^a^d
# List sessions:
screen -ls
# Attach last session:
screen -r
# Attach to session foo:
screen -r foo
# Kill session foo:
screen -S foo -X quit
# Scroll:
Hit your screen prefix combination (C-a / control+A), then hit Escape.
Move up/down with the arrow keys (↑ and ↓).
# Redirect output of an already running process in Screen:
(C-a / control+A), then hit 'H'
# Store screen output for Screen:
Ctrl+A, Shift+H
# You will then find a screen.log file under current directory.
# Create session and attach:
tmux
# Attach to session foo:
tmux attach -t foo
# Detach from the current session:
^bd
# List sessions:
tmux ls
# Attach last session:
tmux attach
# Kill session foo:
tmux kill-session -t foo
# Create detached session foo:
tmux new -s foo -d
# Send command to all panes in tmux:
Ctrl-B
:setw synchronize-panes
# Some tmux pane control commands:
Ctrl-B
# Panes (splits), Press Ctrl+B, then input the following symbol:
# % horizontal split
# " vertical split
# o swap panes
# q show pane numbers
# x kill pane
# space - toggle between layouts
# Distribute Vertically (rows):
select-layout even-vertical
# or
Ctrl+b, Alt+2
# Distribute horizontally (columns):
select-layout even-horizontal
# or
Ctrl+b, Alt+1
# Scroll
Ctrl-b then [ ; then you can use your normal navigation keys to scroll around.
Press q to quit scroll mode.
cat filename|rev|cut -f1|rev
sshpass -p mypassword ssh [email protected] "df -h"
wait %1
# or
wait $PID
wait ${!}
#wait ${!} to wait till the last background process ($! is the PID of the last background process)
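Putting wait to work: launch jobs in the background and block until they finish; a minimal sketch (the sleeps stand in for real work):

```shell
sleep 1 &
pid1=$!
sleep 1 &
pid2=$!
wait "$pid1"       # block until the first job exits
wait "$pid2"       # then the second
# both sleeps ran in parallel, so the total is about 1 second, not 2
```

Calling plain `wait` with no argument waits for all background jobs at once.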
sudo apt-get install poppler-utils
pdftotext example.pdf example.txt
ls -ld -- */
script output.txt
# start using terminal
# to logout the screen session (stop saving the contents), type exit.
tree
# go to the directory you want to list, and type tree (sudo apt-get install tree)
# output:
# one/
# └── two
# ├── 1
# ├── 2
# ├── 3
# ├── 4
# └── 5
#
# 1. install virtualenv.
sudo apt-get install virtualenv
# 2. Create a directory (name it .venv or whatever name your want) for your new shiny isolated environment.
virtualenv .venv
# 3. source virtual bin
source .venv/bin/activate
# 4. you can check if you are now inside the sandbox.
type pip
# 5. Now you can install your pip package, here requirements.txt is simply a txt file containing all the packages you want. (e.g tornado==4.5.3).
pip install -r requirements.txt
#install the useful jq package
#sudo apt-get install jq
#e.g. to get all the values of the 'url' key, simply pipe the JSON to the following jq command (use .[] to descend into arrays, i.e. jq '.[].url')
jq '.url'
history -w
vi ~/.bash_history
history -r
D2B=({0..1}{0..1}{0..1}{0..1}{0..1}{0..1}{0..1}{0..1})
echo -e ${D2B[5]}
#00000101
echo -e ${D2B[255]}
#11111111
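Going the other way, bash arithmetic expansion accepts a base#number form, so you can convert a binary (or hex) string back to decimal:

```shell
echo $((2#00000101))   # 5
echo $((2#11111111))   # 255
echo $((16#ff))        # 255; the same syntax works for any base from 2 to 64
```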
echo "00110010101110001101" | fold -w4
# 0011
# 0010
# 1011
# 1000
# 1101
sort -k3,3 -s
cat file.txt|rev|column -t|rev
echo 'hihihihi' | tee outputfile.txt
# use '-a' with tee to append to file.
cat -v filename
expand filename
unexpand filename
od filename
tac filename
while read a b; do yes $b |head -n $a ;done <test.txt
More coming!!