
EEVDF Scheduler On The Verge Of Being “Complete”


Merged one year ago in Linux 6.6, the EEVDF scheduler replaced the CFS code with the aim of providing a better and more robust scheduling policy for the kernel. With a new set of patches for this “Earliest Eligible Virtual Deadline First” scheduling code, it’s nearing the point of officially being complete.
While EEVDF has been part of the mainline Linux kernel for a year, patches have continued to evolve this scheduler code, whose ideas originate from a late-90s research paper.

Peter Zijlstra has been spearheading much of this work, and today he posted a set of 24 patches that he hopes will be the final version of this work. Zijlstra wrote of the Saturday patch series, entitled “Complete EEVDF”:
“So after much delay this is hopefully the final version of the EEVDF patches. They’ve been sitting in my git tree for ever it seems, and people have been testing it and sending fixes.
I’ve spend the last two days testing and fixing cfs-bandwidth, and as far as I know that was the very last issue holding it back.
These patches apply on top of queue.git sched/dl-server, which I plan on merging in tip/sched/core once -rc1 drops.
I’m hoping to then merge all this (+- the DVFS clock patch) right before -rc2.”
So if all goes according to plan, EEVDF will be “complete”, though that’s not to say new optimizations, features, and fixes won’t still be tacked on after that point.

These 24 patches include code cleanups, the delayed-dequeue, DELAY_ZERO and ENQUEUE_DELAYED features, and other changes. There’s also a patch at the end to help better measure thread time in a DVFS (Dynamic Voltage and Frequency Scaling) world, given the dynamically changing clock speeds of modern processors.

7 Amazing Things You Can Do with a Linux Home Server


Linux can run almost everywhere, and if something does not run Linux, it can usually be made to 😉 One of its most useful applications happens to be a Linux home server. Sure, you can use Windows as a home server as well, but Linux is a reliable option for the long run. In this article, I will list multiple uses of a Linux home server and hopefully convince you to set yours up today.

What is a Linux Home Server?

A home server is a private server hosted locally that can be accessed over the home network or the internet. In this case, Linux powers it. One of the main advantages of having a Linux home server is total control and privacy over your data and media streaming activities.

Sometimes a Linux home server is also referred to as a homelab (more on this as you read on). Related: What is a Homelab and Why Should You Have One? Having a homelab setup has multiple advantages; learn what it is and why you should consider one for yourself.

Numerous open-source software programs are available to tailor your home server for a specific use case. For example, Plex or Kodi can be used as a media server, Samba can be used to share files, and Nextcloud can be used to collaborate and synchronize files.

Setting up a home server is beyond the scope of this article. However, Ubuntu as the Linux distribution should be a safe bet to power your hardware. After that is done, you would need to choose from the many free and open-source software options available for home servers, and then get started.

Now that we know what a Linux home server is, what exactly are the uses for one? Let me highlight some:

1. Your Private Cloud Storage

Perhaps the most widely used feature of a Linux home server, file storage allows you to store and share files, documents, photos, videos, and more. Additionally, there is no privacy risk because it is your very own cloud server.

Of course, you are responsible for backing up files or setting up a RAID configuration. So, you need to invest a significant amount of time in learning the tech to keep your files safe.

You can access your Linux cloud server from anywhere on the globe and on any device. That means you are only a few keystrokes away from your server. Nextcloud should be the perfect open-source app to help you create your very own cloud server.

2. Smart Home Control

Having one remote for all your smart home appliances is an idea that everyone dreams of. Well, with a Linux home server, this dream can become a reality. With a Linux server, you can create a control hub for all your home appliances like your thermostat, smart bulbs, CCTV cameras, smart fans, air conditioners, and all the devices that run on a network. There are various home automation programs like Home Assistant that you can configure to achieve this.

3. Media Server

Why go through the hassle of sharing your media files across all your devices when you can just put them up on a Linux home server? You can say goodbye to streaming services for watching your favorite shows as well.

Options like Jellyfin help you build a robust local media streaming solution. You can access it through your home network or over the internet (with advanced configuration in place). Related: Setting up Jellyfin Media Server on Raspberry Pi, which puts your Raspberry Pi to good use with local media streaming.

4. Network Security, Ad Blocker or Monitoring

You can use your Linux home server to run network security software or monitor your devices and network if you know how to do it. Even if you are not a cybersecurity enthusiast, you can set up a popular open-source tool called Pi-hole to block ads and trackers, and software like Shorewall can help you create firewalls. All in all, you can use your Linux home server to secure your devices from malware, vulnerabilities, and more.

Related: How to Set Up Pi-hole to Get an Ad-free Life. Pi-hole is a DNS-based advertisement blocker; unlike a Chrome or Firefox extension, Pi-hole can block ads even on your TV.
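If you just want to try Pi-hole quickly, the project documents a one-line installer (run it on the machine that will act as your DNS server, and as always, review a script before piping it to your shell):

curl -sSL https://install.pi-hole.net | bash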
5. Development and Testing

If you are a developer, Linux home servers are nothing short of a paradise for you. With many testing environments and database hosts, Linux servers allow you to create the best models. As I mentioned in the intro section, a home server is also called a homelab. You can use the terms interchangeably, but I believe the latter fits best when you are into testing tools, learning, and developing stuff. We also have a relevant guide to help you get started if you are interested: ZimaBoard Turned My Dream of Owning a Homelab into Reality, about getting control of your data by easily hosting open-source software with a plug-and-play homelab device.

6. Game Hosting

Linux home servers pack something for everyone. If you are a gamer, Linux servers are more than enough for hosting private multiplayer game servers. You can either create your own game server for personal use or set up a commercial game server that helps you earn money (like the Counter-Strike servers). A custom game server allows you to customize your multiplayer experience, and it should be a fun experience if you know what you are doing.

7. Print Servers

Linux home servers can act as a centralized print management platform. This will help you keep track of all your printing tasks. With a Linux server, you can do all kinds of shared printing where different devices can use the same printer. Moreover, it is not just one printer, as one Linux home server can help you manage several printers at once. Of course, this is only a feasible use case if you have a small office type setup with multiple printers, but it is an interesting one.

Conclusion

You can get creative with the uses of a home server. Whether you want to store files, build a solution to automate things at your home, or set up security tools, a Linux home server is the way to go. There is always something that will cater to your needs. It is also very cost-efficient. However, it requires some technical knowledge to set it up and maintain it when required.

💭 Did I miss any of your favorite ways to use a Linux home server? Do let me know in the comments below!

Author Info

Swayam Sai Das is a student exploring the realms of Linux as an Intern Writer at It’s FOSS. He is dedicated when it comes to pushing ranks in FPS games, and enjoys reading literary classics in an attempt to put on an academic facade.

DreamQuest N95 Mini PC Running Linux: Benchmarks


This is a multi-part blog series looking at a DreamQuest N95 Mini PC running Linux. The model we’re testing has an Intel N95 processor, 32GB of DDR4 RAM, and a 1TB M.2 SSD. On paper, it looks like an inexpensive machine for running Linux.
This article benchmarks the DreamQuest N95 Mini PC. The tests are run using the Phoronix Test Suite unless otherwise stated. Rather than compare the DreamQuest’s performance against processors found in modern mini PCs, we’re going to take a different approach here.
We benchmark the machine against a server/workstation, a tiny desktop PC, and a fairly old mini PC. We want to see how well the DreamQuest Mini PC might function as, say, a home server or a desktop replacement.

Each machine is tested with the same software and configured to ensure consistency between results. All power management functionality is disabled when running the benchmarks. For ease of reference, the system’s specifications are listed on the final page together with links to all articles in this series.
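If you want to reproduce these tests yourself, the Phoronix Test Suite is packaged for most distributions; on a Debian or Ubuntu based system something like the following should work (assuming the phoronix-test-suite package is available in your release), or it can be downloaded from phoronix-test-suite.com:

sudo apt update
sudo apt install phoronix-test-suite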
Let’s start with some general system tests.

$ phoronix-test-suite benchmark build-linux-kernel
The benchmark shows how long it takes to build the Linux 6.8 kernel in a default configuration. The test uses all cores/threads of a PC, but it’s not the type of test where CPU cores run at 100%. A lot of the time, the compiler is waiting on other things like RAM and disk. It’s therefore a good indicator of a machine’s general system performance.
The DreamQuest N95 puts in an admirable performance given that its CPU is a mere 15W TDP whereas the Xeon and HP i5-6500T machines are 105W TDP and 35W TDP respectively. The N95 runs the HP fairly close. Naturally, the Xeon machine compiles the kernel quicker courtesy of its much higher number of cores (12 cores 24 threads).

$ phoronix-test-suite benchmark encode-flac
The DreamQuest machine again runs the HP machine fairly close, and thrashes the Xeon into the bargain. The benchmark only uses a single core. The Xeon is even beaten comfortably by the i3-5005U Mini PC, which highlights the deficiencies of the Xeon CPU with software running on a single core.
This benchmark indicates that the DreamQuest machine will make a good home server or desktop machine, given that most of the time its 4 cores will not be maxed out.
Next page: Page 2 – Processor

Pages in this article:
Page 1 – Introduction / System
Page 2 – Processor
Page 3 – Memory / Graphics
Page 4 – Disk / WiFi
Page 5 – Specifications

Complete list of articles in this series:

DreamQuest N95 Mini PC

Part 1 – Introduction to the series with an interrogation of the system

Part 2 – Benchmarking the DreamQuest N95 Mini PC

SSH Hardening on MikroTik L009UiGS-2HaxD | Lisenet.com :: Linux | Security


The time has come to replace our good old 2011UAS-2HnD-IN with the L009UiGS-2HaxD.
SSH Hardening
MikroTik L009UiGS-2HaxD comes with RouterOS v7. As of RouterOS v7.7, you can enable Ed25519 host keys as well as disable SHA1 usage with strong crypto.
Enabling strong crypto (which is disabled by default) does the following:

Prefers 256- and 192-bit encryption instead of 128-bit.
Disables null encryption.
Prefers sha256 for hashing instead of sha1.
Disables md5.
Uses a 2048-bit prime for Diffie-Hellman exchange instead of 1024-bit.

SSH into the router and run the following command:
/ip ssh set allow-none-crypto=no host-key-size=4096 host-key-type=ed25519 strong-crypto=yes
Generate a new host key to replace the current one on the router:
/ip/ssh/regenerate-host-key
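To confirm the new settings took effect, you can print the router's current SSH configuration (a quick sanity check; the output layout varies between RouterOS versions):

/ip ssh print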
Use ssh-audit to verify:
$ ./ssh-audit.py mikrotik.hl.test
# general
(gen) banner: SSH-2.0-ROSSSH
(gen) compatibility: OpenSSH 7.4+, Dropbear SSH 2020.79+
(gen) compression: disabled

# key exchange algorithms
(kex) curve25519-sha256 -- [info] available since OpenSSH 7.4, Dropbear SSH 2018.76
`- [info] default key exchange from OpenSSH 7.4 to 8.9
(kex) diffie-hellman-group-exchange-sha256 (2048-bit) -- [warn] 2048-bit modulus only provides 112-bits of symmetric strength
`- [info] available since OpenSSH 4.4
(kex) ext-info-s -- [info] pseudo-algorithm that denotes the peer supports RFC8308 extensions

# host-key algorithms
(key) ssh-ed25519 -- [info] available since OpenSSH 6.5, Dropbear SSH 2020.79

# encryption algorithms (ciphers)
(enc) aes192-ctr -- [info] available since OpenSSH 3.7
(enc) aes256-ctr -- [info] available since OpenSSH 3.7, Dropbear SSH 0.52
(enc) [email protected] -- [info] available since OpenSSH 6.2

# message authentication code algorithms
(mac) hmac-sha2-256 -- [warn] using encrypt-and-MAC mode
`- [info] available since OpenSSH 5.9, Dropbear SSH 2013.56
(mac) hmac-sha2-512 -- [warn] using encrypt-and-MAC mode
`- [info] available since OpenSSH 5.9, Dropbear SSH 2013.56

# fingerprints
(fin) ssh-ed25519: SHA256:OdM8KZKPh0BM0N1iQiSZZgeIkNPHodPfgWoS6tkb7JI
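As an extra check from a Linux client, you can fetch the router's Ed25519 host key and compare its SHA256 fingerprint against the one reported above (mikrotik.hl.test is the same example hostname used with ssh-audit):

$ ssh-keyscan -t ed25519 mikrotik.hl.test 2>/dev/null | ssh-keygen -lf -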


Nagios installation on Centos 7 part 2 (installing plugins and NRPE)


Introduction

In our previous article we walked you through installing Nagios Core on a CentOS 7 system. In this article we will explain how to install the Nagios plugins and the Nagios Remote Plugin Executor (NRPE) package.

How does Nagios work?

Nagios Core runs from a central server which holds the configuration files. It runs active checks to monitor the state of services like HTTP and SSH, checks whether a server is up via ICMP, and also monitors resource consumption such as CPU load, memory utilization, etc. The core server has a huge library of plugins, and much of the functionality and flexibility of Nagios is derived from the use of these plugins.

What are Nagios plugins?

Plugins are compiled executables or scripts (Perl scripts, shell scripts, etc.) that can be run from a command line to check the status of a host or service. Nagios uses the results from plugins to determine the current status of hosts and services on your network. Nagios will execute a plugin whenever there is a need to check the status of a service or host. The plugin performs the check and simply returns the results to Nagios, which then processes them and takes any necessary actions.

Installing Nagios plugins

The plugins which provide the most commonly needed monitoring checks are available as a tarball which we will download and install. At the time of this writing, the latest version is Nagios Plugins 2.2.1.

[ssuri@linuxnix:~] $ curl -L -O http://nagios-plugins.org/download/nagios-plugins-2.2.1.tar.gz
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed

0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
0 2664k 0 1213 0 0 1330 0 0:34:11 --:--:-- 0:34:11 1331
36 2664k 36 975k 0 0 537k 0 0:00:04 0:00:01 0:00:03 537k
100 2664k 100 2664k 0 0 1071k 0 0:00:02 0:00:02 --:--:-- 1072k
[ssuri@linuxnix:~] $ tar xvf nagios-plugins-2.2.1.tar.gz
nagios-plugins-2.2.1/
nagios-plugins-2.2.1/perlmods/
nagios-plugins-2.2.1/perlmods/Config-Tiny-2.14.tar.gz
nagios-plugins-2.2.1/perlmods/parent-0.226.tar.gz
nagios-plugins-2.2.1/perlmods/Test-Simple-0.98.tar.gz
nagios-plugins-2.2.1/perlmods/Makefile.in
nagios-plugins-2.2.1/perlmods/version-0.9903.tar.gz
nagios-plugins-2.2.1/perlmods/Makefile.am
—————————————–output truncated for brevity

[ssuri@linuxnix:~] $ cd nagios-plugins-2.2.1
[ssuri@linuxnix:~/nagios-plugins-2.2.1] $ ls
ABOUT-NLS AUTHORS config.h.in configure.ac INSTALL Makefile.am nagios-plugins.spec.in perlmods plugins-scripts REQUIREMENTS THANKS
acinclude.m4 build-aux config.rpath COPYING LEGAL Makefile.in NEWS pkg po SUPPORT tools
ACKNOWLEDGEMENTS ChangeLog config_test FAQ lib mkinstalldirs NPTest.pm plugins README tap
aclocal.m4 CODING configure gl m4 nagios-plugins.spec NP-VERSION-GEN plugins-root release test.pl.in

Now we will configure the plugins using the configure script provided with the tarball.

[ssuri@linuxnix:~/nagios-plugins-2.2.1] $ ./configure --with-nagios-user=nagios --with-nagios-group=nagios --with-openssl
checking for a BSD-compatible install… /usr/bin/install -c
checking whether build environment is sane… yes
checking for a thread-safe mkdir -p… /usr/bin/mkdir -p
checking for gawk… gawk
checking whether make sets $(MAKE)… yes
checking whether to disable maintainer-specific portions of Makefiles… yes
checking build system type… x86_64-unknown-linux-gnu
checking host system type… x86_64-unknown-linux-gnu
checking for gcc… gcc
checking for C compiler default output file name… a.out
checking whether the C compiler works… yes
checking whether we are cross compiling… no
checking for suffix of executables…
----------------------------------------- output truncated for brevity

Now compile Nagios Plugins with this command:

[ssuri@linuxnix:~/nagios-plugins-2.2.1] $ make

Then install it with the ‘make install’ command:

[ssuri@linuxnix:~/nagios-plugins-2.2.1] $ sudo make install

Installing NRPE

NRPE allows you to remotely execute Nagios plugins on other Linux/Unix machines. This allows you to monitor remote machine metrics (disk usage, CPU load, etc.). We talk about NRPE in depth in another article. For now we will explain how to install it on our CentOS 7 system. We will download the NRPE source code using curl and then configure, compile and install it from source, as we’ve done with Nagios Core and the Nagios plugins earlier.

[ssuri@linuxnix:~] $ curl -L -O http://downloads.sourceforge.net/project/nagios/nrpe-2.x/nrpe-2.15/nrpe-2.15.tar.gz
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed

0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 349 100 349 0 0 1376 0 --:--:-- --:--:-- --:--:-- 1384
[2018-10-09 12:34:08]
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 409k 100 409k 0 0 395k 0 0:00:01 0:00:01 --:--:-- 714k
[ssuri@linuxnix:~] $ tar xvf nrpe-*.tar.gz
nrpe-2.15/
nrpe-2.15/Changelog
nrpe-2.15/LEGAL
nrpe-2.15/Makefile.in
nrpe-2.15/README
nrpe-2.15/README.SSL
nrpe-2.15/README.Solaris
nrpe-2.15/SECURITY
----------------------------------------- output truncated for brevity

[ssuri@linuxnix:~/nrpe-2.15] $ ./configure --enable-command-args --with-nagios-user=nagios --with-nagios-group=nagios --with-ssl=/usr/bin/openssl --with-ssl-lib=/usr/lib/x86_64-linux-gnu
checking for a BSD-compatible install… /usr/bin/install -c
checking build system type… x86_64-unknown-linux-gnu
checking host system type… x86_64-unknown-linux-gnu
checking for gcc… gcc
checking for C compiler default output file name… a.out
checking whether the C compiler works… yes
checking whether we are cross compiling… no
checking for suffix of executables…
checking for suffix of object files… o
checking whether we are using the GNU C compiler… yes
checking whether gcc accepts -g… yes
checking for gcc option to accept ANSI C… none needed
checking whether make sets $(MAKE)… yes
checking how to run the C preprocessor… gcc -E
checking for egrep… grep -E
checking for ANSI C header files… yes
checking whether time.h and sys/time.h may both be included… yes
checking for sys/wait.h that is POSIX.1 compatible… yes
checking for sys/types.h… yes
----------------------------------------- output truncated for brevity

Now build and install NRPE and its xinetd startup script with these commands:

[ssuri@linuxnix:~/nrpe-2.15] $ make all
[ssuri@linuxnix:~/nrpe-2.15] $ sudo make install
[ssuri@linuxnix:~/nrpe-2.15] $ sudo make install-xinetd
[ssuri@linuxnix:~/nrpe-2.15] $ sudo make install-daemon-config

Edit the /etc/xinetd.d/nrpe file using vi or any other editor of your choice and add the IP address of the Nagios core server to the only_from directive.

[ssuri@linuxnix:~] $ cat /etc/xinetd.d/nrpe
# default: on
# description: NRPE (Nagios Remote Plugin Executor)
service nrpe
{
flags = REUSE
socket_type = stream
port = 5666
wait = no
user = nagios
group = nagios
server = /usr/local/nagios/bin/nrpe
server_args = -c /usr/local/nagios/etc/nrpe.cfg --inetd
log_on_failure += USERID
disable = no
only_from = 127.0.0.1 192.168.87.134
}
[ssuri@linuxnix:~] $

This will allow Nagios core to communicate with NRPE.

Restart the xinetd service to start NRPE:

[ssuri@linuxnix:~] $ sudo service xinetd restart
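Before moving on, it's worth a quick sanity check that the NRPE daemon is answering. Assuming the check_nrpe plugin has been built on this host (it can be installed from the same NRPE source tree with make install-plugin), you can query the daemon locally, since 127.0.0.1 is in the only_from list; it should reply with its version string (NRPE v2.15 here):

[ssuri@linuxnix:~] $ /usr/local/nagios/libexec/check_nrpe -H 127.0.0.1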
Conclusion

With this we’ve completed the second and third steps involved in getting our Nagios Core installation up and running. In the next article we will go through the final phase of completing our setup: the Nagios Core configuration.

Author Info

He started his career in IT in 2011 as a system administrator. He has since worked with HP-UX, Solaris and Linux operating systems along with exposure to high availability and virtualization solutions. He has a keen interest in shell, Python and Perl scripting and is learning the ropes on AWS cloud, DevOps tools, and methodologies. He enjoys sharing the knowledge he’s gained over the years with the rest of the community.

Embracing the Power and Versatility of Linux: A Comprehensive Exploration of Its Benefits – NoobsLab


In an era dominated by technology, the choice of an operating system holds significant importance. Linux, an open-source operating system, has gained immense popularity due to its numerous advantages over proprietary alternatives. Developed collaboratively by a global community of developers, Linux offers a host of benefits that have attracted individuals, businesses, and even governments. In this comprehensive article, we will delve into the diverse range of advantages Linux brings to the table, making it an appealing choice for users worldwide.

1. Open-source and Cost-effectiveness
At the core of Linux lies its open-source nature, a defining characteristic that sets it apart from proprietary operating systems. The availability of its source code allows users to modify, distribute, and enhance the operating system according to their needs. This open philosophy fosters innovation and collaboration, creating a vibrant ecosystem of developers and enthusiasts who contribute to its continuous improvement. Additionally, Linux is often free or available at a significantly lower cost compared to proprietary operating systems, making it an attractive option for individuals and organizations operating on limited budgets.

2. Customizability and Flexibility
Linux offers an unparalleled level of customizability and flexibility, catering to a wide range of user preferences and requirements. Unlike proprietary operating systems that come pre-packaged with a fixed set of features and applications, Linux distributions (or “distros”) allow users to choose from a vast array of options. From lightweight distros for older hardware to specialized distributions tailored for specific industries or purposes, Linux empowers users to create a personalized computing environment. Additionally, Linux’s modular nature allows users to install only the necessary components, resulting in a streamlined and efficient system that conserves resources.

3. Stability and Reliability
One of the hallmarks of Linux is its exceptional stability and reliability. Linux-based systems are renowned for their ability to operate continuously for extended periods without requiring frequent reboots. This inherent stability makes Linux an ideal choice for mission-critical systems and servers. Linux powers a significant portion of the internet, serving as the backbone for websites, cloud services, and network infrastructure. Its robustness contributes to enhanced system uptime, reducing maintenance overhead and ensuring uninterrupted operation, even under heavy workloads.

4. Security and Privacy
Linux has garnered a well-deserved reputation for its robust security features. Its open-source nature promotes transparency, allowing users and developers to review and audit the code for security vulnerabilities. The worldwide community of developers actively collaborates to identify and address security flaws promptly, resulting in rapid updates and patches. Linux also incorporates built-in security features, such as access controls, robust user privilege management, and secure remote access protocols, all of which contribute to a more secure computing environment. Furthermore, Linux’s lower market share in comparison to mainstream operating systems makes it a less attractive target for malware and viruses, adding an additional layer of security for users.

5. Vast Software Repository
Linux boasts a vast software repository, offering a diverse range of applications for various purposes. Package managers, such as apt and yum, simplify software installation and updates, eliminating the need for users to search the web for software downloads. The availability of open-source software ensures cost-effectiveness, as users can access and utilize applications without licensing fees. Linux supports a wide range of programming languages, making it a preferred choice for developers and software engineers. Additionally, Linux’s compatibility with popular software development tools and frameworks further enhances its appeal in the software development community.

6. High Performance and Efficiency
Linux is renowned for its efficient resource utilization and optimized performance. Its ability to run on various hardware architectures, from low-powered devices to high-performance servers, demonstrates its versatility. Linux offers better control over system resources, allowing users to prioritize tasks and allocate resources accordingly. Moreover, Linux’s lightweight nature and minimal system requirements make it suitable for older hardware, extending the lifespan of machines that might otherwise be considered obsolete.

7. Community Support and Collaboration
The Linux community is a vibrant and supportive ecosystem, characterized by its inclusivity and knowledge sharing. Countless forums, mailing lists, and online communities provide platforms for users to seek assistance, exchange ideas, and collaborate on projects. The community’s collective expertise serves as a valuable resource for both beginners and experienced users alike. This strong community support fosters a sense of camaraderie among Linux users, empowering individuals to overcome challenges and learn from each other’s experiences.

Conclusion
Linux has emerged as a powerful and versatile operating system, offering a multitude of benefits to individuals, organizations, and governments. Its open-source nature, customizability, stability, security, vast software repository, high performance, and strong community support contribute to its widespread adoption. Linux has proven its mettle in a wide range of applications, from personal computing to enterprise-level systems. As technology continues to evolve, Linux remains at the forefront, driving innovation and empowering users with a robust and efficient operating system. By embracing Linux, users unlock a world of possibilities and join a global community that champions collaboration, knowledge sharing, and the principles of open-source software.

RESOLVED. Gradual rollout for governor-mysql is paused


The gradual rollout of slot-5 for the packages governor-mysql@1.2-111 and governor-mysql@1.2-112 was paused due to issues with installing governor-mysql@1.2-112, which fixes the CLOS-2705 bug.

We are planning to fix the issue and resume the slot on the 17th of June.
We apologize for the inconvenience.
 
Update 17.06.2024
governor-mysql@1.2-112 is available for further installations.

 

How to Find the Largest Directories in Linux

Finding the largest directories in a Linux system is like playing detective in a world of data. It’s not just about freeing up disk space—it’s about understanding where your resources are being used and making informed decisions about what to keep, what to archive, and what to delete.
In this blog, I’ll walk you through various ways to unearth the heavyweight directories lurking in your Linux system. Let’s dive in, and yes, we’ll sprinkle a bit of my personal spice along the way.
The du command: The traditional heavyweight
The du (disk usage) command is the old reliable for finding out how much space a directory is using. It’s like that old detective coat you can’t part with—never fails you. Here’s how to use it:
Basic usage
du -sh /path/to/directory

The -s option gives you a summary instead of listing every file, and -h makes the output human-readable (who really counts bytes these days?).
Finding the top offenders
But what if we want to find the largest directories within a certain path? Enter the sorting magic:
du -h /path/to/directory | sort -rh | head -n 10

This command will list the top 10 heavyweight directories in descending order. The sort command is our friend here, with -r reversing the order (because we want the largest, not the smallest) and -h handling human-readable numbers (yes, it knows that 1M is larger than 1K).
Output
1.5G /path/to/directory/subdir1
1.2G /path/to/directory/subdir2
800M /path/to/directory/subdir3

Ah, the satisfaction of seeing exactly where your disk space has gone. Priceless. Let me give you a practical example. To list the sizes of directories in the current working directory and sort them to find the largest, you can use the following command:
du -h --max-depth=1 | sort -rh | head -n 5
Example:
1.5G ./dir1
1.2G ./dir2
800M ./dir3
600M ./dir4
450M ./dir5
The graphical approach: For the visually inclined
Not everyone loves the terminal (though I can’t imagine why). For those who prefer a graphical interface, there’s Baobab, the Disk Usage Analyzer in Ubuntu.
(Screenshot: Baobab, the Disk Usage Analyzer, in use.)

Simply open it from your applications menu, and you’ll be presented with a visual breakdown of your disk usage. It’s like a map to treasure, but instead of treasure, it’s data you probably forgot about.
While I personally prefer the terminal (there’s something about text output that just feels more like hacking), I can’t deny the appeal of seeing your data usage laid out in colorful graphs.
The ncdu command: A modern twist
For those who want a bit of both worlds—terminal-based but with an easy-to-navigate interface—ncdu (NCurses Disk Usage) is the answer. It’s like du went to the gym and came back with a new UI.
Installation steps:
Debian/Ubuntu and derivatives
sudo apt-get update
sudo apt-get install ncdu

Fedora
sudo dnf update
sudo dnf install ncdu

CentOS/RHEL
For CentOS or RHEL 7 and below, you might need the EPEL repository:
sudo yum install epel-release
sudo yum update
sudo yum install ncdu

For CentOS/RHEL 8 and newer versions, use dnf instead:

sudo dnf install epel-release
sudo dnf update
sudo dnf install ncdu

Arch Linux
sudo pacman -Syu
sudo pacman -S ncdu

openSUSE
sudo zypper refresh
sudo zypper install ncdu

Alpine Linux
sudo apk add ncdu

Gentoo
sudo emerge --update --newuse ncdu
And run it with:
ncdu /path/to/directory

You’ll be greeted with a navigable interface showing your directories, their sizes, and you can even delete files from within ncdu. Be careful, though; with great power comes great responsibility.
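A handy trick when scanning a large or slow filesystem: ncdu can export its scan results to a file and browse them later, so you only pay the scanning cost once (the file name here is just an example):

ncdu -o scan-results.json /path/to/directory
ncdu -f scan-results.json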

FAQ: Finding the largest directories in Linux
Can I use the du command on the entire system?
Yes, you can, but be prepared to wait because it’ll take some time. Use du -sh / to see the total disk usage of your root directory, but remember, this scans everything, so it might take a while.

How can I exclude a specific directory when using du?
If you want to exclude a directory, you can use the --exclude flag. For example, du -sh --exclude=/path/to/exclude /path/to/directory will calculate the size of /path/to/directory without including the specified excluded path.
Is ncdu better than du?
“Better” is subjective. ncdu offers a more user-friendly interface and interactive usage, making it easier for some users to navigate and manage files directly. du, on the other hand, is straightforward and perfect for quick, scriptable commands. Your preference will depend on your needs and how you like to work.
Can Baobab scan network locations?
Yes, Baobab can scan network locations, but the performance and accuracy can depend on the network’s stability and the remote file system’s characteristics. Use File > Connect to Server in Baobab to add a network location.
What does the sort -rh command do?
The sort -rh command sorts using human-readable sizes (-h, where 1K is less than 1M) and in reverse order (-r), ensuring that larger sizes appear at the top of the list.
How do I install ncdu if it’s not available on my system?
If ncdu is not already installed, you can install it using your distribution’s package manager. For Ubuntu or Debian-based systems, the command is sudo apt-get install ncdu. For Red Hat-based systems, use sudo yum install ncdu.

Can I find the largest files instead of directories?
Absolutely. While the methods mentioned focus on directories, you can find large files using the find command. For example, find /path/to/search -type f -exec du -h {} + | sort -rh | head -n 10 will list the top 10 largest files in the specified path.
How can I monitor disk usage over time?
To monitor disk usage over time, you might need to use additional tools or scripts. One simple approach is to run a du command periodically (via cron jobs, for example) and save the output to a file for later analysis. There are also more sophisticated monitoring solutions available that can track disk usage trends.
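As a minimal sketch of that idea, a cron entry can append a daily snapshot of the biggest directories to a log file (the paths, schedule, and log location are just examples; the /etc/cron.d format shown includes a user field):

# /etc/cron.d/disk-usage-report: log the top 5 directories under /home at 02:00 daily
0 2 * * * root du -h --max-depth=1 /home 2>/dev/null | sort -rh | head -n 5 >> /var/log/disk-usage-report.log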
Conclusion
There are many ways to find the largest directories in your Linux system, each with its own charm. I personally lean towards ncdu for its balance of power and usability, but I’ve been known to run a quick du command just for the nostalgia.
So, go ahead, play detective with your file system. You might be surprised at what you find hiding in the depths of your directories.

Linux Scoop — Ubuntu 22.04 Customization


Hey Ubuntu enthusiasts! 🐧✨ Dive into the ultimate customization experience with Ubuntu 22.04 Version 3.0! 🌈 Here’s a glimpse of the magic we’ve woven into our desktop:

🎨 Everforest Color Scheme Magic: Witness seamless application of the Everforest color scheme across themes, icons, and cursors, giving your Ubuntu a fresh and vibrant look!
🖥️ Desktop Layout: Optimized for productivity! A sleek single panel at the bottom with the main menu in the center—effortless navigation at your fingertips!
⌛ Conky Widget Awesomeness: Stay in the loop with real-time updates on clock, CPU, RAM, weather, network, and the current audio or music track, thanks to our Conky widget on the desktop.

🔧 Additional Setups:
🔄 ZSH and Powerlevel10k: Elevate your terminal game!
🌈 Everforest GNOME Terminal: Bringing color harmony to your command line.
🦊 Everforest Firefox Theme: A consistent aesthetic across your browser.
🚀 Flatpak Applications: Streamlined installation for your favorite apps.
📊 Command-line Apps: Enhance your Ubuntu experience with cava, htop, cmatrix, and neofetch.
🎨 Customize Plymouth: Make your boot experience as unique as you are!

📥 Download Resource Files and Documentation
🎬 Watch the Final Result and Tutorials

Customize your Ubuntu 22.04 like never before! 🚀✨ Share your personalized desktops with us using #UbuntuCustom3_0. Let’s make Linux uniquely yours! 🌟💻

How To Turn Your Current System To An Installable ISO (For Debian, Ubuntu, Arch Linux and Manjaro)


penguins-eggs is a command line tool to turn your current Debian, Ubuntu, Arch Linux or Manjaro system into a redistributable live ISO image. Debian / Ubuntu flavors are also supported (so you can also use this for Xubuntu, Kubuntu, etc.), as well as Linux distributions based on these, like Devuan, Linux Mint and elementary OS.

Using this, you can create an installable live ISO of your Debian / Devuan / Ubuntu-based, Arch Linux or Manjaro system, and include all installed applications as well as your home folder (personal files, configurations, etc.). If you’re not creating the ISO for yourself, but instead want to redistribute it, Eggs can completely remove the user and system data from the generated ISO.

(Screenshot: a live ISO generated from my laptop using penguins-eggs; in the screenshot you can see the Calamares graphical installer.)

The live ISO image created by Eggs can be installed using a graphical user interface (Calamares) or from the command line, using a TUI tool created especially for penguins-eggs, called krill. This command line installer includes support for unattended installations.

Eggs also has various advanced features, like the ability to set the generated ISO to install without an Internet connection (see eggs help tools yolk for details), a script mode to generate scripts to manage the ISO, addons, setting the theme for the live CD and Calamares installer (images), and more. There’s also “penguins-wardrobe”, a repository with YAML and Bash scripts used by Eggs to customize Linux systems starting from a minimal (naked) CLI installation.

It’s worth noting that, as far as I know, penguins-eggs is the only real alternative to the now defunct remastersys, which could create a customized live ISO of Debian, Ubuntu and derivatives, as well as back up an entire Debian / Ubuntu system, including user data, to an installable live ISO.

Below, you’ll find a quick guide on how to remaster your current system and redistribute it as a live ISO file (with or without user and system data). Please note that I’ve only tested this on Ubuntu because time is not on my side right now 😀️.

You might also like: How To Customize Ubuntu Or Linux Mint Live ISO With Cubic

How to turn your current Debian, Ubuntu or Arch Linux system to an installable live ISO

[[Edit]] This does not currently work on Manjaro due to a bug!

1. Install penguins-eggs

On Debian, Devuan, Linux Mint, elementary OS, Ubuntu and its flavors (Xubuntu, Ubuntu MATE, Kubuntu, etc.), you can download the latest penguins-eggs DEB from Sourceforge. Or, if you prefer to add the penguins-eggs APT repository to receive updates for this tool, add the repository and then install penguins-eggs using the following commands:

sudo apt install curl # in case it's not installed
curl -fsSL https://pieroproietti.github.io/penguins-eggs-ppa/KEY.gpg | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/penguins-eggs.gpg
echo "deb [arch=$(dpkg --print-architecture)] https://pieroproietti.github.io/penguins-eggs-ppa ./" | sudo tee /etc/apt/sources.list.d/penguins-eggs.list > /dev/null
sudo apt update
sudo apt install eggs

You might also be interested in: apt-key Is Deprecated. How To Add OpenPGP Repository Signing Keys Without It On Debian, Ubuntu, Linux Mint, Pop!_OS, Etc.

On Arch Linux and Manjaro, you can install penguins-eggs from AUR.

2. (Optional) Install Calamares if you want to use a graphical installer for the live ISO (without this, you can only use the TUI installer). Note that this does not currently work on Arch Linux / Manjaro. Install Calamares using:

sudo eggs calamares --install
3. Start the live ISO creation

Notes before starting the ISO creation:

Besides the options (arguments) specified for the commands below, you can also change the live ISO username and password (when not saving user data), timezone, and more, by editing /etc/penguins-eggs.d/eggs.yaml as root with a text editor.
If you plan on installing the generated ISO unattended (so using krill, the command-line Eggs ISO installer), edit the installation details in the /etc/penguins-eggs.d/krill.yaml file.

To start creating a live ISO from your current system WITHOUT user data, with the ISO filename <NAME>-[arch]-YYYY-MM-DD_HHMM.iso, and standard compression, use:

sudo eggs produce --basename <NAME> --standard

Instead of standard (--standard) compression, you could use maximum (--max) compression, which creates a smaller ISO file but takes more time to build.

The default username used by the live ISO in this case is live, and the password is evolution. The root password is the same, evolution.

To start creating a live ISO from your current system WITH UNENCRYPTED user data, with the ISO filename <NAME>-[arch]-YYYY-MM-DD_HHMM.iso, and standard compression, use:

sudo eggs produce --clone --basename <NAME> --standard

You can also create a live ISO from the current system WITH ENCRYPTED user data (user data is saved encrypted in a LUKS volume inside the live system; the data is not accessible on the live ISO, but is restored when installing the system using the TUI installer; the user data cannot be restored when using the graphical installer, Calamares), with the ISO filename <NAME>-[arch]-YYYY-MM-DD_HHMM.iso, and standard compression:

sudo eggs produce --cryptedclone --basename <NAME> --standard

Once the ISO file has been created, you’ll find it in /home/eggs/.

For more on penguins-eggs, check out its documentation.

thanks to u/sudo_nick
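One final tip: once the ISO is in /home/eggs/, a common way to test it on real hardware is to write it to a USB stick with dd. The file name and target device below are placeholders; double-check the device name, because this overwrites it completely:

sudo dd if=/home/eggs/<NAME>-amd64-YYYY-MM-DD_HHMM.iso of=/dev/sdX bs=4M status=progress conv=fsync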