Earlier we covered Linux topics including RHCSA 8, LVM, Zenity, routing tables, and a Python menu program to automate various Linux operations. In this article we will cover some advanced concepts that are essential in the industry.

What is a Linux distribution, and which one should you choose? This is the most common question for beginners. Because no single vendor manages Linux, understanding and choosing the right version can be challenging. There are many Linux variants, and it's all up to you: you can select the most recent version or an older one, and if you can't find what you're looking for in any Linux version, you can always design your own, tailored to your individual needs. This is possible because Linux and most Linux software are open source. In my opinion, RedHat Linux can be an excellent choice for developers unless cybersecurity comes into the picture, in which case other distributions like Parrot OS would be more appropriate.

For practicing and implementing, we can run an RHEL instance on AWS, ideally t2.medium. To understand the concepts better, run and analyze the commands as you read.

Whenever we log in, we get a terminal, and with that terminal the system creates one session. The **w** command is a built-in tool that allows administrators to view information about users that are currently logged in:

**$ w**

We can log in using the username and password any number of times, but whenever we log in, a new session is created. Sessions are managed by **logind** (or more specifically, **systemd-logind**), which basically helps keep track of users and sessions, their processes, and their idle states. The service is deeply integrated with **systemd** (we'll get into systemd later on). It tells us who has logged in and how many sessions are created:

$ loginctl show-session <session_number>   #Shows session details including scope
$ loginctl user-status                     #Shows the details about the user's sessions

Each session has a different scope: Linux -> Username -> Session.
Note: There's no session if the user is not logged in.

Everything behind the scenes is handled by the kernel. Here comes the cgroup (control group), one of the most powerful features of the Linux kernel. The same user can have multiple sessions, and for each session we can configure a cgroup; with the help of a scope unit we can define the scope of each session. By default, 8192 sessions can be created for users.

Cgroups enable us to assign resources such as CPU time, system memory, and network bandwidth, as well as combinations of these. We can keep track of the cgroups we set up, restrict their access to particular resources, and even modify them dynamically on a live system. System administrators can fine-tune the allocation, prioritisation, denial, control, and monitoring of system resources by utilising cgroups. Proper allocation of hardware resources across jobs and users can improve overall efficiency.

All processes on a Linux system are child processes of a common parent: the **init** process, which is executed by the kernel at boot time and starts other processes.

The scopes are transient; they are not permanent, though we can make them permanent if needed. To check one we can use:

**$ systemctl status session-1.scope**

It shows everything, including constraints, if any (by default there are none), along with all the running tasks/commands. By default we can run 8 commands in parallel.

The **/proc/sys/fs/** directory contains an array of options and information concerning various aspects of the file system, including quota, file handle, inode, and dentry information.

We can check the transient unit file with:

$ systemctl cat session-1.scope

Contents of systemctl cat session-1.scope

Here we can see TasksMax=infinity, which means we can run unlimited tasks. To modify this and set a limit on the scope, we can either go to the path mentioned in the first line of the output, or tell systemctl that we want to edit this file:
$ sudo systemctl edit session-1.scope

This creates a new file for us, in which we can mention whatever we want to override. Just like inheritance in object-oriented programming, it creates a child file that inherits everything and then overrides it, for example a TasksMax= setting. We will need sudo for this to work.

Note: We can type $ !<command_number> to re-run a command from the history and save time.

Inside **/proc/sys/fs/** there is a **cgroup** directory; it shows us everywhere a cgroup can be applied.

Contents of the cgroup directory

In the cgroup directory we have a **pids** directory, which in turn contains a **user.slice** directory. All the internal information is contained here: pids.current holds the number of processes running at the moment, and pids.max tells the maximum limit. For each user there's a different slice; here it is user-1000.slice, as the UID is 1000. All the sessions are present in the user-1000.slice/ directory. So, instead of using the loginctl command, we can pick up everything from here.

Ctrl+R gives reverse search in history: we can type part of the command and then use the arrow keys.

Note: We can use $ journalctl -u session-1.scope to see the logs of the scope (-u tells it that it's a unit).

Now, if we want to add any constraint to the unit, first let's check it:

$ systemctl status user-1000.slice

To check the default settings we can use:

$ systemctl cat user-1000.slice

For every unit, we can use the show command to get the entire details:

$ systemctl show user-1000.slice

We can edit the unit file either by going to the path mentioned in the output or by using:

$ sudo systemctl edit user-1000.slice

Note: The path is /usr/lib/systemd/system/user-.slice.d/10-defaults.conf. The "-" after "user" in the path means "for all users".
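As a sketch of what such an override looks like, here is a hypothetical drop-in as `systemctl edit` would create it; the file name and the limit value are illustrative examples, not defaults:

```ini
# /etc/systemd/system/session-1.scope.d/override.conf
# Hypothetical drop-in: only the keys listed here override the parent unit;
# everything else is inherited, like a child class overriding a single method.
[Scope]
TasksMax=100
```

After saving such a file, systemd needs a daemon-reload to pick up the change.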
Since there's no separate file for the user, we need to use "--force", i.e., the command becomes:

$ sudo systemctl edit --force user-1000.slice

For example, to check the current memory usage we can use:

$ systemctl show user-1000.slice -p MemoryCurrent

We can change limits without going into the file. For example, to change MemoryLimit:

$ sudo systemctl set-property user-1000.slice MemoryLimit=3G

We need to reload systemd for the command to take effect:

$ sudo systemctl daemon-reload

To stop/kill/logout any session we can simply run:

$ sudo loginctl kill-session <session_number>

To pause a session, add --signal=SIGSTOP. It won't log out, but the terminal will freeze:

$ sudo loginctl kill-session <session_number> --signal=SIGSTOP

To unfreeze, we simply run the same command with --signal=SIGCONT.

The $ last command searches back through the /var/log/wtmp file and displays a list of all users logged in (and out) since that file was created.

The $ lastb command functions similarly to last. By default, lastb lists the contents of the /var/log/btmp file, which contains all bad login attempts made on the system.

The $ lastlog command prints the contents of the last login log file, /var/log/lastlog, including the login name, port, and last login date and time.

To record the terminal we can use the tlog-rec command. We can install it with $ sudo yum install tlog. To start recording: $ tlog-rec --file-path=my.log (we can stop by typing exit). To play it back: $ tlog-play --file-path=my.log

We often come across situations where we wish to run multiple commands in the terminal at a time, such as monitoring with $ ps aux while running other commands in parallel. To do this we generally open a new terminal window, but that creates a new session. To overcome this challenge we use tmux.

To list all sessions:

$ tmux ls (or) $ tmux list-sessions

To create a session, simply run $ tmux. A new window is created. The beauty of these windows is that they are created by tmux: we can split them into multiple parts.
We use Ctrl+B and the arrow keys to move between the panes, and Ctrl+B " (double quote) to split the window into a second pane. If we run $ loginctl we can see that we are still in the same session.

To go back to a window (here 0 is the number of the window, which we can check using $ tmux ls):

$ tmux attach -t 0

Detach from the currently attached session:
tmux: Ctrl+B d (or) :detach
screen: Ctrl+A d (or) :detach

To check the windows:

$ tmux list-windows

To create a new session in tmux (if we don't give the name, it'll just be named with the next number, i.e., 1):

$ tmux new -s <name>

To rename any session:

$ tmux rename-session -t 0 <new_name>

To kill the tmux session itself:

$ tmux kill-session [-t session_name]

If we close the terminal, tmux stays alive, so we don't lose the session and the work.

Cheatsheet: tmuxcheatsheet.com

No operating system tracks when we move around in directories. But if we track system calls, we can track anything. For example, $ cat /etc/passwd is not recorded in the logs, but $ cat performs system calls to read the data from the hard disk. Now, if we create logs for system calls, we can track it. Powerful tools for analyzing such logs include Splunk.

There are two different spaces: user space (programs, e.g., vi) and kernel space (system calls, e.g., the read operation). Context switching keeps happening behind the scenes between user space and kernel space.

The $ strace command tells us everything about the system calls a command makes. Example:

$ strace cat /etc/passwd

The option '-c' shows only a summary of the calls:

$ strace -c cat /etc/passwd

Another powerful command is netcat:

$ nc -l <Port_Number> #Server
$ nc <IP_Address> <Port_Number> #Client

The above commands set up a chat server. As soon as any one side exits, the connection gets closed. To avoid this we use '--keep-open' in the server command. Adding '--exec' will transmit only the output of the given program, such as "/usr/bin/free", as the message to the client:
**$ nc --exec /usr/bin/free --keep-open -l <port>**

Using "-vv" in the nc command will show the server the details about the client connecting.

The nc command is not available in Windows. Alternatively, we can use telnet in the command prompt: Search -> Turn Windows features on or off -> Enable Telnet.

The following enables the client to access the shell of the server. This is a common way of implementing backdoor access/remote shells by hackers, without a username and password:

**$ nc --exec /bin/bash --keep-open -l <port>**

The '--allow' option helps the server permit only whitelisted IP addresses to connect.

/var/log/secure : Contains all the security logs, i.e., successful and failed login attempts, opening a new terminal, etc.

$ tail -f /var/log/secure : Keeps the log open in real time so we can monitor it.

Combining the two helps us send our logs in real time:

**$ nc --exec "/usr/bin/tail -f /var/log/secure" --keep-open -l <port>**

It's ideal to use this with tmux: even if we close PuTTY or the terminal is disconnected, the connection stays active.

ncat also provides a broker facility, which relays messages between all connected clients (socat offers similar relay features):

$ sudo yum install socat
$ ncat --broker --listen -p <port>

$ strace -c nc -l <port> : to check all the system calls nc makes.

auditctl is the command for creating log rules for monitoring.

$ auditctl -w /etc/passwd -p rwa : This command will keep tracking the file: if anyone reads, writes, or changes the attributes of the file, it will create an audit log. The log is created in /var/log/audit. Since it's a secure directory, we need to access it from the root account. We can attach a key/tag using "-k <key>".

$ sudo auditctl -l : To check the rules.

Every syscall has its id, for example syscall=257. We can google the number to check which call it is.

$ ps -C nc : gives the process id. Let pid = 22758.

$ strace -p 22758 : tracks the process in real time. We can add "-e <syscall>" to track a specific call.

$ strace -p $(pidof nc) -e write : $(pidof nc) substitutes the pid of nc automatically.

For audit rules on system calls, we need to specify the architecture:
$ auditctl -a always,exit -F arch=b64 -S write -S bind -k mync

$ ausearch -k mync | grep bind : Searches the audit logs for entries with the key and grabs the bind ones. This way we can create rules for everything we need.

Seccomp filtering (SECure COMPuting with filters) provides a means for a process to specify a filter for incoming system calls.

$ sudo dmidecode --type memory : To check the underlying details about the hardware, in this case the memory.

The proc directory (cd /proc/) contains all the data of the RAM. When the system boots up, this pseudo-filesystem is mounted on this folder. The numbered folders in this directory are processes, with their PIDs as names. We can enter any of these folders and examine the data; for example, the status file in a PID folder tells us a lot of information about the process.

A process's memory consists of stack memory and heap memory. Memory for the data in programs is provided from the heap: malloc() allocates memory, and after the program finishes with it, the memory should be deallocated. When memory is not deallocated, this is known as memory leakage.

To check memory leaks we use a powerful tool known as Valgrind. Valgrind is a suite of tools for debugging and profiling programs. It's an instrumentation framework for building dynamic analysis tools. There are Valgrind tools that can automatically detect many memory management and threading bugs, and profile your programs in detail. You can also use Valgrind to build new tools. The Valgrind distribution currently includes seven production-quality tools: a memory error detector, two thread error detectors, a cache and branch-prediction profiler, a call-graph generating cache and branch-prediction profiler, and two different heap profilers. It also includes an experimental SimPoint basic block vector generator.

To check memory leakage in a program:
$ valgrind --leak-check=full --tool=memcheck ./<program_name>

Need for Privileged Programs — The Set-UID Mechanism

The '>' symbol is used to save data, for example date > hi.txt. The symbol is known as the redirection symbol in the shell.

At times we get Permission Denied even after using sudo, because sudo applies to the command but not to the redirection. To overcome this:

$ sudo bash -c "cat > hello.txt"

Another need for privileged programs: the Set-UID concept, which includes:
•Allowing a user to run a program with the program owner's privilege.
•Allowing users to run programs with temporary elevated privileges.

Every process has two user IDs:
Real UID (RUID): Identifies the actual owner of the process.
Effective UID (EUID): Identifies the privilege of the process. Access control is based on the EUID.

A Set-UID program is similar to any other program, with the exception of a single bit called the Set-UID bit.

$ ls -l /usr/bin/cat : We notice that SUID is not set. If we see an s in place of the x in the user (owner) permission triplet, then SUID is set.

$ chmod u+s /usr/bin/cat : Now SUID is set; we can run the cat command on protected files without sudo. We can replace "+" with "-" to remove the permission:

$ chmod u-s /usr/bin/cat

Note: The shortcut to pause terminal output is Ctrl+S, and Ctrl+Q resumes it.

To kill the httpd process:

$ kill -9 <pid>

9 means the KILL signal, which is not catchable or ignorable. In other words, it signals the process (some running application) to quit immediately. SIGKILL just happened to get the number 9.
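To see signals in action without touching a real service, here is a small self-contained sketch; sleep stands in for any long-running process, and the short pauses are arbitrary settling times:

```shell
# Demonstrate SIGSTOP, SIGCONT and SIGKILL on a throwaway process.
sleep 60 &                                      # stand-in for any long-running process
pid=$!

kill -SIGSTOP "$pid"; sleep 0.2                 # freeze it: state becomes 'T' (stopped)
stopped=$(awk '{print $3}' "/proc/$pid/stat")

kill -SIGCONT "$pid"; sleep 0.2                 # thaw it: state returns to 'S' (sleeping)
resumed=$(awk '{print $3}' "/proc/$pid/stat")

kill -9 "$pid"                                  # SIGKILL: cannot be caught or ignored
status=0
wait "$pid" || status=$?                        # the shell reports 128 + signal number

echo "stopped=$stopped resumed=$resumed killstatus=$status"
```

The exit status 137 is 128 + 9, the shell's convention for a process terminated by a signal.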
So, it is equivalent to:

$ kill -SIGKILL <pid>

$ kill -l
 1) SIGHUP       2) SIGINT       3) SIGQUIT      4) SIGILL       5) SIGTRAP
 6) SIGABRT      7) SIGBUS       8) SIGFPE       9) SIGKILL     10) SIGUSR1
11) SIGSEGV     12) SIGUSR2     13) SIGPIPE     14) SIGALRM     15) SIGTERM
16) SIGSTKFLT   17) SIGCHLD     18) SIGCONT     19) SIGSTOP     20) SIGTSTP
21) SIGTTIN     22) SIGTTOU     23) SIGURG      24) SIGXCPU     25) SIGXFSZ
26) SIGVTALRM   27) SIGPROF     28) SIGWINCH    29) SIGIO       30) SIGPWR
31) SIGSYS      34) SIGRTMIN    35) SIGRTMIN+1  36) SIGRTMIN+2  37) SIGRTMIN+3
38) SIGRTMIN+4  39) SIGRTMIN+5  40) SIGRTMIN+6  41) SIGRTMIN+7  42) SIGRTMIN+8
43) SIGRTMIN+9  44) SIGRTMIN+10 45) SIGRTMIN+11 46) SIGRTMIN+12 47) SIGRTMIN+13
48) SIGRTMIN+14 49) SIGRTMIN+15 50) SIGRTMAX-14 51) SIGRTMAX-13 52) SIGRTMAX-12
53) SIGRTMAX-11 54) SIGRTMAX-10 55) SIGRTMAX-9  56) SIGRTMAX-8  57) SIGRTMAX-7
58) SIGRTMAX-6  59) SIGRTMAX-5  60) SIGRTMAX-4  61) SIGRTMAX-3  62) SIGRTMAX-2
63) SIGRTMAX-1  64) SIGRTMAX

To stop all the httpd processes:

$ sudo kill -9 $(pidof httpd)

$ cd /usr/lib/systemd/system ; $ vi httpd.service : This file is used to maintain everything about httpd. It is also known as the service unit file, and we can write our rules in it.

$ systemctl status httpd.service : Thanks to cgroups, this can be used instead of the $ ps command.

$ sudo systemctl start httpd : can be used to start the service.

$ sudo systemctl stop httpd : stops the service; we don't have to use the ps command, httpd command, netstat command, etc.

$ journalctl -u httpd.service : to check all the logs of the unit file. So, having the unit file makes things simple.

$ sudo systemctl show httpd.service : to see all the cgroup settings.

Modifying the cgroup works the same as in the first section. Another way:

$ cd /etc/systemd/system
$ sudo mkdir httpd.service.d
$ cd httpd.service.d
$ vi vim.conf

Note: The grep command is case sensitive. To make it case insensitive we use "-i".

This way is better because if we just want to revert the changes, we can simply delete these files. Since we created a new file, we need to reload systemd.
$ sudo systemctl daemon-reload ; $ sudo systemctl restart httpd

Everything is started by systemd. Whenever we give the restart or shutdown command, it all goes through systemd.

$ systemd-delta : is used to identify and compare configuration files that override other configuration files. Files in /etc/ have the highest priority, files in /run/ have the second highest priority, and files in /usr/lib/ have the lowest priority. Files in a directory with higher priority override files with the same name in directories of lower priority. In addition, certain configuration files can have ".d" directories which contain "drop-in" files with configuration snippets which augment the main configuration file. "Drop-in" files can be overridden in the same way by placing files with the same name in a directory of higher priority (except that, in the case of "drop-in" files, both the "drop-in" file name and the name of the containing directory, which corresponds to the name of the main configuration file, must match).

$ sudo kill -s STOP $(pidof httpd) : Pause the process.

$ sudo kill -s CONT $(pidof httpd) : Resume the process.

$ ps -o pid,stat,comm,rss,%cpu -C httpd --sort=-rss

$ tuna -t httpd -P : Check which processor it is using, along with other details. We can make changes in the .conf file to change the CPU binding.

$ sudo systemd-run date : Runs the command as a transient unit. Since it creates unit files, we can set constraints too:

$ sudo systemd-run -p MemoryLimit=1G -p CPUAffinity=1 date

$ sudo yum install mlocate : gives the $ locate command, which can be used to search for files. But for this command to work we need to create a database with $ updatedb, which goes inside all folders and files and stores the entire information in a database. The $ updatedb command hangs the system for a minute or two as it scans the entire hard disk. The $ find command, by contrast, has to walk all files every time, which makes it very slow.

We can limit IO operations to the hard disk:
If the hard disk speed is 100 MBps, this will make it 10 MBps so that other programs and the system don't hang:

$ sudo systemd-run -p BlockIOWeight=10 updatedb

$ sudo taskset -p -c <pid> : to check the affinity, i.e., which CPU the process is bound to.

$ sudo taskset -p -c 0,1 <pid> : To change the affinity without modifying the .conf file. We don't have to restart the process after using this command, which makes it a great way to change the affinity on the fly.

$ sudo bash -c "echo 3 > /proc/sys/vm/drop_caches" : To clear the cache.

$ ulimit is limited, whereas cgroup has more functionality.

$ lshw : command shows details about the hardware in the system.

$ sudo bash -c "lshw -html > os.html" : Entire system information in HTML format.

The kernel maintains all the information about devices in the /sys/devices/ directory.

$ cd /sys/devices/system/cpu/cpu1 ; $ cat online : An output of 1 says the CPU is online.

$ sudo vi cpu1/online : Change the value to 0 and the CPU stops running. To verify: $ nproc

$ sudo bash -c "echo 1 > cpu1/online" : to bring it back online.

/sys/devices/ is the directory where we can go and do almost all customizations on the fly.

$ systemctl -t slice

There are two main slices: the system slice and the user slice. A slice unit is a concept for hierarchically managing resources of a group of processes. This management is performed by creating a node in the Linux Control Group (cgroup) tree. Units that manage processes (primarily scope and service units) may be assigned to a specific slice. For each slice, certain resource limits may be set that apply to all processes of all units contained in that slice. Slices are organized hierarchically in a tree. The name of the slice encodes its location in the tree: it consists of a dash-separated series of names, which describes the path to the slice from the root slice. The root slice is named -.slice. Example: foo-bar.slice is a slice that is located within foo.slice, which in turn is located in the root slice -.slice.
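To make the slice naming concrete, a custom slice is just a small unit file; the slice name and the limits below are hypothetical examples, not defaults:

```ini
# /etc/systemd/system/demo.slice -- a hypothetical custom slice
[Unit]
Description=Demo slice for capping a group of services

[Slice]
MemoryLimit=1G
CPUShares=512
```

A service can then be placed into it by setting Slice=demo.slice in its [Service] section, after which it appears under demo.slice in the cgroup tree.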
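The /sys/devices walk above can be condensed into a small self-contained sketch; it is read-only, so it needs no root, and the exact values depend on your machine:

```shell
# Inspect CPU topology and hotplug state straight from /sys, with no extra tools.
online=$(cat /sys/devices/system/cpu/online)    # e.g. "0-1": the range of online CPUs
ncpus=$(nproc)                                  # number of CPUs usable by this process
echo "online CPUs: $online (count: $ncpus)"

# Each cpuN directory (except cpu0 on many kernels) has an 'online' toggle file:
ls -d /sys/devices/system/cpu/cpu[0-9]*
```

Writing 0 or 1 into a cpuN/online file, as shown above, is what takes a core offline or brings it back.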
$ systemd-cgls : Shows the control group tree structure in detail.

$ systemd-cgtop : Shows the live status of control groups, like the number of tasks running, %CPU, memory, etc.

$ dd if=/dev/zero of=/dev/null : This command will run for unlimited time, picking up garbage data and throwing it into a garbage sink. It utilizes the complete CPU and is used for stress testing.

$ cd /etc/systemd/system ; $ vi s1.service :

[Unit]
Description=my stress program

[Service]
ExecStart=/usr/bin/dd if=/dev/zero of=/dev/null

$ sudo systemctl daemon-reload

The same command will now run as a service. Behind the scenes, it's going to work in the system slice. We can verify this with:

$ sudo systemctl start s1.service
$ sudo systemd-cgtop (or) $ sudo systemctl status s1.service

Since no other process is working, it utilizes 100% of the CPU. Now, if we start another program, for example a while loop in bash, that user also wants 100% CPU. It's the kernel's responsibility to allocate resources, and on the fly it starts sharing the CPU. This is called CPU sharing. Since there are two CPUs, i.e., two cores, the two processes are distributed among the two CPUs.

We can use the $ vmstat 1 command to check the CPU idle percentage. This command updates every second.

Now, if we have only one CPU (to demonstrate this, we can take cpu1 offline as shown earlier), both processes ask for 100%, and cgroup decides how to allocate the resources. If we check the status, we notice cgroup provides roughly 75% to one and 25% to the other. To tune this, we can add settings such as:

CPUShares=1024 (or) 512 (or) 2048
CPUSchedulingPolicy=FIFO

$ sudo yum install sysstat

$ mpstat -P ALL : gives information about all the CPUs/cores.

$ chrt : sets or retrieves the real-time scheduling attributes of an existing PID, or runs a command with the given attributes.
$ chrt -m : lists the available scheduling policies with their min/max priorities.

$ sudo chrt -d --sched-runtime 6000000 --sched-deadline 10000000 --sched-period 20000000 0 dd if=/dev/zero of=/dev/null

$ sudo yum install perf

Perf is a lightweight CPU profiler; it checks CPU performance counters, tracepoints, uprobes, and kprobes, monitors program events, and creates reports.

$ sudo perf sched record -- sleep 20
$ sudo perf sched latency
$ sudo perf sched map
$ sudo perf sched timehist

$ systemctl list-unit-files : Tells how many different unit files are available.

A timer is like crontab:

# Scheduling the job to run after 60 seconds.
$ sudo systemd-run --on-active=60 touch /tmp/hhhh.txt

$ sudo yum install testdisk
$ testdisk #Useful for recovering deleted partitions

$ sudo yum install cockpit : Cockpit collects a lot of performance metrics and shows them in a graphical dashboard.

$ systemctl restart pmlogger : pmlogger is a tool that collects performance metrics and connects to Cockpit behind the scenes.

$ systemctl start cockpit.socket : Cockpit generally works on port 9090.

http://<IP>:<Port> : to access the Cockpit dashboard.

$ iostat : Hard disk status.

$ iotop : For storage I/O related to processes.

$ sar : can be used to monitor a Linux system's resources like CPU usage, memory utilization, I/O device consumption, network activity, disk usage, process and thread allocation, battery performance, plug-and-play devices, processor performance, file systems, and more. Monitoring and analyzing a Linux system aids understanding of resource usage, which can help improve system performance to handle more requests.

Network packet monitoring:

$ tcpdump
$ tcpdump tcp port 80 -n -X

BONUS: Automatically Correct Mistyped Directory Names

Use shopt -s cdspell to correct typos in the cd command automatically, as shown below. If you are not good at typing and make a lot of mistakes, this will be very helpful.
$ cd /etc/mall
-bash: cd: /etc/mall: No such file or directory
$ shopt -s cdspell
$ cd /etc/mall
$ pwd
/etc/mail

I hope this helps you increase your productivity.

Connect with me: Linkedin ; GitHub