
Revive an Expired Puppet CA with Certregen | Lisenet.com :: Linux | Security


Renewing an expired Puppet CA certificate using Certregen module.
The Problem
I’ve been involved in a project migrating ageing infrastructure (e.g. CentOS 7) and legacy applications (e.g. MySQL 5.7) to modern software. One of the first problems was an old installation of Puppet Server v5 whose CA certificate had already expired.
$ rpm -qa | grep puppet
puppet5-release-5.0.0-14.el7.noarch
puppet-agent-5.5.22-1.el7.x86_64
puppetserver-5.3.16-1.el7.noarch

Puppet’s CA certificate is only valid for a limited time, usually 5 years, after which it expires. Once the CA has expired, Puppet’s services no longer accept any certificates signed by it, and the Puppet infrastructure immediately stops working.
The Solution
Leaving aside the fact that Puppet v5.5 is EOL, we needed to bring the system back to a working state. This meant regenerating the CA certificates.
Puppetlabs provides a certregen module that allows one to regenerate and redistribute Puppet CA certificates and refresh CRLs, without invalidating certificates signed by the original CA. It can also revive a Puppet CA that has already expired.
Working with Certregen
Installation
Install the Puppet module puppetlabs-certregen:
# puppet module install puppetlabs-certregen
Notice: Preparing to install into /etc/puppetlabs/code/environments/production/modules …
Notice: Downloading from https://forgeapi.puppet.com …
Notice: Installing — do not interrupt …
/etc/puppetlabs/code/environments/production/modules
└─┬ puppetlabs-certregen (v0.2.0)
└── puppetlabs-stdlib (v4.25.1)
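
If you want to confirm the module was installed into the expected environment, you can list the installed modules:

# puppet module list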

Check for Expired Certificates
We can see that the CA certificate’s status is “expired”.
# puppet certregen healthcheck
“ca” (SHA256) 11:8B:52:F2:E8:CB:66:42:43:C3:51:9A:6E:3D:26:83:4F:69:17:B6:4B:A2:73:1B:26:44:AC:A0:16:01:7C:9F
Status: expired
Expiration date: 2024-03-11 14:35:39 UTC

“puppet.example.com” (SHA256) 11:36:8F:20:BB:3D:1C:5B:D9:1D:55:68:D9:CC:0D:D4:3A:E6:C4:0E:8B:02:32:E6:72:D4:F6:D1:07:10:47:E1
Status: expiring
Expiration date: 2024-03-31 16:39:25 UTC
Expires in: 17 days, 9 hours, 5 minutes, 55 seconds

“ip-10-10-10-18.eu-west-1.compute.internal” (SHA256) 11:39:B9:1E:7B:A3:EC:28:3A:E8:C0:77:58:96:3F:12:C6:39:04:54:DC:CF:56:54:25:63:B2:DA:19:50:D1:90
Status: expiring
Expiration date: 2024-03-31 17:07:45 UTC
Expires in: 17 days, 9 hours, 34 minutes, 15 seconds

[OUTPUT TRUNCATED]

Generate a New CA Certificate
We want to generate a new CA certificate using the existing CA keypair. We do not want to create a new keypair. We also want to automatically update the expiration date of the certificate revocation list (CRL).
# puppet certregen ca --ca_serial 01
Notice: Backing up current CA certificate to /etc/puppetlabs/puppet/ssl/ca/ca_crt.1710401711.pem
Notice: Signed certificate request for ca
CA expiration is now 2029-03-13 07:35:11 UTC
CRL next update is now 2029-03-13 07:35:11 UTC
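
To confirm that the regenerated CA certificate really carries the new expiry date, it can be inspected with openssl. This is an optional check; the path below assumes the default Puppet 5 ssldir layout used above:

# openssl x509 -noout -issuer -enddate -in /etc/puppetlabs/puppet/ssl/ca/ca_crt.pem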

Distribute the New CA Certificate
Distribute the new CA cert to every node in your Puppet infrastructure. This depends on how your environment has been set up.
In our case we used a regular user account with sudo privileges to copy files using SCP.
$ for i in $(cat list_of_puppet_agent_servers.txt);
do
scp ./ca.pem ${i}:~/
ssh ${i} "sudo mv ca.pem /etc/puppetlabs/puppet/ssl/certs/ca.pem; sudo chown root: /etc/puppetlabs/puppet/ssl/certs/ca.pem"
done
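
On each agent it is worth confirming that the replacement CA certificate is now in place and that the agent can still talk to the master. A minimal check, assuming the standard agent paths used above:

$ openssl x509 -noout -enddate -in /etc/puppetlabs/puppet/ssl/certs/ca.pem
$ sudo puppet agent --test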

References
https://github.com/puppetlabs-toy-chest/puppetlabs-certregen

How To Use Percona Xtrabackup To Create A MySQL Slave


Percona XtraBackup can be used to create “hot backups” of MySQL servers fairly quickly and avoids some of the pitfalls of mysqldump. XtraBackup can be set up to use backup locks instead of read locks, which is much less invasive; this is available for InnoDB tables. MyISAM and other tables will still need to be read locked to perform a backup. XtraBackup works on MySQL, MariaDB and Percona Server (versions 5.1, 5.5, 5.6, 5.7). You can then use this backup to easily build a MySQL slave. This guide assumes you already have a running SQL server on CentOS.

Install Percona XtraBackup

First install the repository; additional repositories are available from Percona’s site:

yum install -y http://www.percona.com/downloads/percona-release/redhat/0.1-4/percona-release-0.1-4.noarch.rpm

Then install Percona XtraBackup:

yum install -y percona-xtrabackup-24

Install rsync as well, as we will use it to transfer the backup to the slave:

yum install -y rsync

Create a Backup with XtraBackup

First create an initial backup with innobackupex:

innobackupex /root/percona-backup

Replace /root/percona-backup with another path if you need to. If you need to specify a user and password, use the following flags:

innobackupex --user=username --password=passwordhere /root/percona-backup

When it starts you should see the following message:

innobackupex: Starting the backup operation

Upon completion you should see the following:

xtrabackup: Transaction log of lsn (1597945) to (1597945) was copied.
170810 21:37:19 completed OK!

If you look in the /root/percona-backup directory you set, you will see a time-stamped directory; this is the SQL backup:

# ls /root/percona-backup/
2017-08-10_21-37-17

You will now need to apply the transaction log to the backup:

innobackupex --apply-log /root/percona-backup/2017-08-10_21-37-17

You will again be looking for the ‘OK’ message at the end of the program:

xtrabackup: The latest check point (for incremental): ‘1597945’
xtrabackup: Stopping log copying thread.
170810 21:44:58 >> log scanned up to (1597945)

170810 21:44:58 Executing UNLOCK TABLES
170810 21:44:58 All tables unlocked
170810 21:44:58 Backup created in directory ‘/root/percona-backup/2017-08-10_21-37-17/2017-08-10_21-44-56/’
170810 21:44:58 [00] Writing /root/percona-backup/2017-08-10_21-37-17/2017-08-10_21-44-56/backup-my.cnf
170810 21:44:58 [00] …done
170810 21:44:58 [00] Writing /root/percona-backup/2017-08-10_21-37-17/2017-08-10_21-44-56/xtrabackup_info
170810 21:44:58 [00] …done
xtrabackup: Transaction log of lsn (1597945) to (1597945) was copied.
170810 21:44:59 completed OK!

Prepare the SQL Master

On the master server, if it is not already prepared for slaves, you will want to get it ready. Make sure a bin log is configured in my.cnf:

nano /etc/my.cnf

Add the following:

log_bin = /var/log/mysql/mysql-bin.log

Also set a server-id in /etc/my.cnf:

server-id = 1

The server-id can be any number; it just needs to be unique on each server in the replication. You will need to restart MySQL for it to take effect:

systemctl restart mysql

You will also want to ensure the firewall is open for port 3306:

firewall-cmd --zone=public --add-service=mysql --permanent
firewall-cmd --zone=public --add-port=3306/tcp --permanent

Then reload the firewall:

firewall-cmd --reload

Prepare the SQL Slave

Add a server-id to /etc/my.cnf on the slave:

server-id = 2

You will also want to open the same port in the firewall as you did on the master. Since we are copying the entire XtraBackup, you can go ahead and shut down MySQL on the slave:

systemctl stop mysql

Make a backup of /var/lib/mysql in case something goes wrong:

cp -R /var/lib/mysql /var/lib/mysql.bak

Ensure the firewall is open for port 3306 on the slave as well:

firewall-cmd --zone=public --add-service=mysql --permanent
firewall-cmd --zone=public --add-port=3306/tcp --permanent

Then reload the firewall:

firewall-cmd --reload

Configure MySQL Replication

Sync the backup from the master to the slave:

rsync -vPa /root/percona-backup/2017-08-10_21-37-17 192.168.1.100:/var/lib/mysql

Replace the path with the path to your backup, and 192.168.1.100 with the IP address of your slave. Once the rsync has finished, change the ownership of /var/lib/mysql so it is owned by the mysql user:

chown -R mysql. /var/lib/mysql

On the master, enter the MySQL console:

mysql

Grant replication permissions to the slave:

grant replication slave on *.* to 'replicationuser'@'192.168.1.100' identified by 'password';

Change replicationuser to your desired user name, 192.168.1.100 to the IP address of the slave, and password to your desired password. Then flush privileges on the master:

flush privileges;

Start The MySQL Slave

On the slave, gather the needed slave information:

cat /var/lib/mysql/xtrabackup_binlog_info
mysql-bin.000001 245

This contains the MASTER_LOG_FILE and MASTER_LOG_POS information you will need to enter in MySQL to connect to the master. Connect to MySQL on the slave:

mysql -uroot -p

Enter the following information:

CHANGE MASTER TO MASTER_HOST='192.168.1.101',
MASTER_USER='replicationuser',
MASTER_PASSWORD='password',
MASTER_LOG_FILE='mysql-bin.000001',
MASTER_LOG_POS=245;

MASTER_HOST is the IP address of the MySQL master. MASTER_USER is the replication user you set earlier on the master. MASTER_PASSWORD is the password you configured earlier on the master. MASTER_LOG_FILE and MASTER_LOG_POS are obtained from the xtrabackup_binlog_info file you viewed earlier. Once you have entered that information in MySQL, you should see the following:

MariaDB [(none)]> CHANGE MASTER TO MASTER_HOST='192.168.1.101',
    -> MASTER_USER='replicationuser',
    -> MASTER_PASSWORD='password',
    -> MASTER_LOG_FILE='mysql-bin.000001',
    -> MASTER_LOG_POS=245;
Query OK, 0 rows affected (0.04 sec)

You can go ahead and start the slave now:

start slave;

To check slave status, type show slave status;

MariaDB [(none)]> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.1.101
Master_User: replicationuser
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 483
Relay_Log_File: mariadb-relay-bin.000002
Relay_Log_Pos: 767
Relay_Master_Log_File: mysql-bin.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 483
Relay_Log_Space: 1063
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 1
1 row in set (0.00 sec)

The specific lines you are looking for to ensure replication is working are:

Slave_IO_Running: Yes
Slave_SQL_Running: Yes

That is all that is needed for creating MySQL replication with Percona XtraBackup.

Aug 13, 2017 LinuxAdmin.io

Understanding the basics of Lambda Function URLs


Published: October 21, 2023 | Modified: October 21, 2023

In this guide, we’ll take you through the fundamental concepts of Lambda Function URLs. We’ll discuss their definition, explore their applications, and address security considerations, providing a comprehensive overview.

What is a Lambda Function URL?

It’s a dedicated, unique, and static URL for your Lambda function that enables remote invocation of the backend Lambda function over the network. This straightforward and budget-friendly method simplifies Lambda invocation, bypassing the need to manage complex front-end infrastructure like API Gateway, Load Balancers, or CloudFront. However, it comes at the expense of the advanced features those services provide.

It follows the format:

https://<url-id>.lambda-url.<region>.on.aws
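
For illustration, a function URL can be attached to an existing function from the command line. This is a minimal sketch assuming AWS CLI v2 and a hypothetical function named my-function:

aws lambda create-function-url-config --function-name my-function --auth-type AWS_IAM
aws lambda get-function-url-config --function-name my-function

The response includes the generated URL in the format shown above.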

Why use a Lambda Function URL?

Creating them is quite straightforward and simple. The AuthType (security) is the only configuration you need to provide. CORS config is optional.

They come at no additional cost.

Once configured, they require minimal maintenance.

For straightforward use cases, they can replace the need for designing, managing, and incurring the costs of front-end infrastructure, such as API Gateway.

They are most appropriate for development scenarios where you can prioritize other aspects of applications/architecture over the complexity of Lambda invocation methods.

When to use Lambda Function URLs?

Lambda Function URLs are valuable for accelerating testing and development of an application: the Lambda invocations themselves take priority, while the method of invocation takes a backseat.

In production, they’re practical when your design doesn’t necessitate the advanced features provided by alternative invocation methods like API Gateway or Load Balancers, etc.

These URLs are also beneficial when dealing with a limited number of Lambdas, offering a simple, cost-effective, and maintenance-free approach to invocations.

How to secure Lambda Function URLs?

You can manage access to Lambda Function URLs by specifying the AuthType, which offers two configurable options:

AWS_IAM: This allows you to define AWS entities (users or roles) that are granted access to the function URL. You need to ensure a proper resource policy is in place allowing intended entities access to Action: lambda:InvokeFunctionUrl

NONE: Provides public, unauthenticated access. Use this option cautiously, as it allows unrestricted access. When you choose it, Lambda automatically creates a resource-based policy with Principal: * and Action: lambda:InvokeFunctionUrl and attaches it to the function.

It’s important to remember that Lambda’s resource-based policy is always enforced in conjunction with the selected AuthType. Please read this AWS documentation for more details.

The Lambda resource policy can be configured at Lambda > Configuration > Permissions > Resource-based policy statements.
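
The equivalent can be done from the command line as well. A hedged sketch of granting public access to a function URL configured with AuthType NONE (the function name and statement id are placeholders):

aws lambda add-permission --function-name my-function --statement-id AllowPublicFunctionUrl --action lambda:InvokeFunctionUrl --principal "*" --function-url-auth-type NONE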

With the basics of Lambda Function URLs in mind, refer to how to create Lambda Function URL and kick-start your journey with them!

KDE Drives Fixes Into Its Triple Buffering, Adds Konsole Feature To Save Terminal Output


In addition to refining the KDE Human Interface Guidelines, KDE developers have been busy with a variety of other tasks this week in polishing their open-source desktop stack.
KDE developer Nate Graham is out with his usual weekend post that summarizes all of the interesting KDE changes made for the week. This week some of the most prominent KDE changes include:
- The Konsole terminal emulator can now save all output in a terminal view to a file in real-time.
- Distribution vendors can now customize the default set of favorite apps across Kickoff, Kicker, and the Application Dashboard beginning in Plasma 6.2.
- The KDE Info Center now has a page showing detailed memory information where available.
- Plasma 6.1.4 and beyond will ensure that when KWin opens a window whose minimum height is taller than the screen, the titlebar is positioned so that it’s visible rather than being cut off.
- Fixes for stuttering and other problems within the KWin triple buffering feature.
- Various other bug fixes and UI refinements.
More details on the KDE changes this week via Nate’s blog.

Stargazers Ghost Network, a network of 3,000 GitHub accounts with a single purpose: distributing malware in a distributed fashion


Whether the story we are about to tell is truly unprecedented is something we will leave for the reader to decide, but certainly, after covering the GitHub comments that spread malware through seemingly certified links, this new distribution method joins the list of “creative” techniques.

The story is reported by Bleeping Computer, which picks up the research titled Stargazers Ghost Network published by Check Point and explains how what has been dubbed Distribution-as-a-Service (DaaS) for malware was built by a not-well-identified actor aptly named “Stargazer Goblin”.

The distribution network, named Stargazers Ghost Network, is made up of GitHub repositories together with compromised WordPress sites that publish password-protected archives containing malware. In most cases the malware involved is a well-known name, but the unprecedented part, as we said, is the delivery method.

The scale is also particularly significant: we are talking about 3,000 GitHub accounts which, as in the above-mentioned “comment scam”, rely on the low level of attention users pay once they see the github.com domain in the address.

As the article describes, the repositories that are part of the Stargazers Ghost Network change the rules of the game by giving the malicious repository a badge and a form of verification provided by multiple GitHub accounts, which supports its legitimacy.

In practice, these are repositories with GitHub stars:

Repositories with GitHub stars

Try explaining afterwards that those 62 stars are all fakes that only serve to legitimise anomalous content… By the time that realisation comes (if it ever does), it is usually too late.

The other peculiar thing about this network is the way it sustains itself, because the obvious question is “isn’t it enough to just close those malicious accounts?”, and the answer to why it is not that simple lies in this other diagram:

The network’s maintenance and recovery process

What is shown is the network’s maintenance and recovery process, which appears to be automatic, since it detects banned accounts and repositories and repairs them when necessary. The use of different account roles ensures that only minimal damage is done when and if GitHub takes action against accounts or repositories that have violated its rules.

Of course, however peculiar and well-structured the method may be, it does not change the substance: in the end, the vehicle for most infections remains the element sitting between the chair and the keyboard, because somebody does click on those links, without giving it much thought.

Raoul Scarazzini

A lifelong open-source and Linux enthusiast, in 2009 I founded the portal Mia Mamma Usa Linux! to share articles, news and, in general, everything related to the world of the penguin, with particular attention to interoperability, HA and cloud topics. And yes, my mum has been using Linux since 2009.

Source: https://www.miamammausalinux.org/2024/07/stargazers-ghost-network-una-rete-di-3000-account-github-che-hanno-un-solo-ed-unico-scopo-diffondere-malware-in-forma-distribuita/

Sparky news 2024/04 – SparkyLinux


The 4th monthly Sparky project and donation report of 2024:
- Linux kernel updated up to 6.8.8, 6.6.29-LTS, 6.1.89-LTS & 5.15.157-LTS
- added autopartitioning to the Sparky CLI Installer (Sparky testing (8) only)
- added to repos: 64gram, an unofficial Telegram client app, & Telegram amd64
Because nala does not full-upgrade packages properly and does not handle broken packages on Sparky rolling (8), I recommend uninstalling nala. The sparky-upgrade tool on Sparky 8 now uses apt again to upgrade packages.
A worldwide SF1 (SourceForge) package mirror repos has been activated.
There are also 3 new package repository and ISO mirror servers available for downloading ISO images and installing packages: US2 in the USA, and SI1 and SI2 in Singapore, Asia. All 3 servers are provided thanks to Astian, Inc.: https://sparkylinux.org/partners/
Check the Sparky Wiki’s mirror page: https://wiki.sparkylinux.org/doku.php/mirrors
We invite companies and organizations that would like to support our project and join the group of SparkyLinux sponsors.
Many thanks to all of you for supporting our open-source projects. Your donations help keep them and us alive.
Don’t forget to send a small tip in May too, please.

Supporters and amounts:

Antoine B.: € 15
Guillermo C.: PLN 227
Keith K.: $ 10
Galen T.: $ 1.47
Kaveh: $ 10.79
Sharon D.: $ 5
Laura T.: $ 5.40
Wojciech H.: PLN 2
Grzegorz P.: PLN 20
Grzegorz K.: PLN 1
Olaf T.: € 10
Rafał Z.: PLN 50
Klarita H.: € 99
Mariusz S.: PLN 169.68
Henryk K.: € 25
Krzysztof T.: PLN 250
Andrzej J.: PLN 100
Paweł S.: PLN 47.66
Andrzej P.: PLN 20
Marek B.: PLN 10
Alexander F.: € 20
Rudolf L.: € 10
Piotr M.: PLN 300
Karl A.: € 1.66
Matt M.: € 8.25
Ralf A.: € 10
Stanisław G.: PLN 50
Jarosław G.: PLN 50
Maciej S.: PLN 50
Jorg S.: € 5
Mateusz G.: PLN 25
Fujita K.: PLN 49.66
Sean D.: € 5
Andrzej M.: PLN 25

Total: 58%

At a glance: € 208.91, PLN 1447, $ 32.66, mBTC 0

* Keep in mind that some amounts coming to us will be reduced by commissions charged by online payment services. Only donations sent directly in PLN to our Polish bank account are credited in full.


Linux Scoop — Exploring the Latest in MX Linux 23


Introduction to MX Linux 23: MX Linux 23 is built upon the solid foundation of Debian 12 ‘Bookworm,’ promising a stable and reliable experience. But that’s just the beginning. The real magic lies in the new and improved features that await you.

A Fresh Look at Installation: In this video, we’ll guide you through the installation process, showcasing the support for ‘swapfiles’ and how MX Linux 23 streamlines your setup. Whether you’re a seasoned Linux user or a newcomer, you’ll find something to love.

Discovering the Desktop Experience: MX Linux 23 offers a diverse range of desktop environments, including Xfce 4.18, KDE Plasma 5.27, and Fluxbox 1.3.7. We’ll explore the visual appeal, functionality, and performance of each, helping you choose the perfect one for your needs.

Enhanced MX Tools Suite: The MX Tools suite has received significant updates, making system management and customization a breeze. We’ll walk you through the key tools that empower you to tailor your MX Linux environment.

Effortless Software Management: MX Linux 23 simplifies software installation and updates with its robust package manager. We’ll demonstrate how this distro makes managing your software a hassle-free experience.

Unveiling MX Service Manager: Stay tuned to discover the latest addition, the MX Service Manager, and how it gives you more control over system services, ensuring your system behaves exactly as you want it to.

Don’t miss out on MX Linux 23, a distro that combines the stability of Debian with cutting-edge features.

Can I Run Kodi on Linux?


Kodi is a popular open-source media server application for various platforms. It lets you organize different types of content and also access streaming services.
Originally, it came out as a media center app for the first Xbox console. Today, you can install the software on devices running almost any operating system.
We will focus more on Kodi on Linux in this post. Linux users can read this article to discover how to install and run Kodi on their operating system.

What Do You Need to Run Kodi On Linux
To run Kodi on Linux, you must ensure that your system fulfills specific requirements. These include:
CPU: You need an x86-64 or x86 processor, such as an Intel Pentium 4 or AMD Athlon 64.
RAM: 1 GB of RAM or more is needed for running Kodi on an HTPC media player device. If you use the system for multiple purposes, you need 2 GB or more RAM.
Graphics: Kodi runs well on all graphics cards launched in the preceding 10 years. If you use cards from AMD/ATI or Intel, use Mesa 11.3 or later.
Video decoding: You must ensure that your VPU or GPU supports VDPAU or VAAPI. If you have an older Nvidia or AMD card, VAAPI is recommended.
Drive space: Kodi itself takes up between 100 and 200 MB of space. If your hardware is compatible with net booting, there’s no need for internal storage for Kodi. Generally speaking, you need a minimum of 4 to 8 GB of drive space for Kodi on your Linux device.
How to Set up Kodi on Ubuntu OS on Linux?
You can easily set up the latest versions of Kodi on Ubuntu Linux. Follow this process.

Use the shortcut Ctrl + Alt + T to launch a terminal in Ubuntu.
To add the official Personal Package Archives (PPA) repository of Kodi, use the command sudo add-apt-repository ppa:team-xbmc/ppa.
Now, update the package cache. This will allow your system to get the packages from the latest software repository.
To do so, use the command sudo apt update.
Now is the time to install Kodi via the following command: sudo apt install kodi.
This command will also upgrade a prior Kodi version if you have it.
After Kodi has been installed, open it by navigating to the menu.
Now, either find your preferred media in the library or access any streaming service.

POINT TO NOTE: This process for running Kodi on Linux also applies to Linux Mint, Deepin Linux, Pinguy OS, and other Ubuntu-based Linux distributions.
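
For convenience, here is the same sequence of commands collected in one place, exactly as described in the steps above:

sudo add-apt-repository ppa:team-xbmc/ppa
sudo apt update
sudo apt install kodi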
Installing a VPN on Kodi
You are now ready to stream content through Kodi. But if you want to enjoy a safe and secure streaming experience, using a VPN is a must. A VPN encrypts or codes your web activities and data.
It does so in such a way that any external entity cannot decode it. VPN masks the user’s traffic and the IP address.
So, anyone trying to intercept your connection or steal your sensitive data in any way cannot do so.
It is critical to install a VPN before using Kodi on Linux. Various Kodi add-ons give you access to copyrighted content.
If caught, it can put you at risk. Also, many Kodi add-ons containing great content are restricted to specific locations. You cannot watch this content without bypassing the geo-restrictions.
A VPN helps you unblock geo-restrictions and access regional content easily.
You cannot install & use any VPN on Kodi. The VPN you install should be compatible with Kodi on Linux.
You can find a list of the best VPNs for Kodi on Firesticktricks.com.
These VPNs have thousands of servers across multiple countries. So, they enable you to access regional content with ease.
They are also equipped with various strong security features, such as split tunnelling, kill switch, and DNS leak protection.
Thus, by using any of them with Kodi on Linux, you can make way for a safer and more exciting streaming and gaming experience.
Final Thoughts
Kodi is a wonderful platform that lets you access a whole new world of content. The media player supports most major operating systems, including Linux.
But it is essential to use it with a VPN if you want to keep your privacy and security intact. Use the method above to set up Kodi on Linux, install suitable add-ons, and tap into a pool of new content.
A tech enthusiast who has explored some amazing technology and keeps exploring more. Along the way, I have had the chance to work on Android development, Linux, AWS, and DevOps with several open-source tools.

How to Install Hestia Control Panel on Ubuntu and Debian


Hestia Control Panel (HestiaCP) is a free web hosting tool for Linux that offers both a web and command-line interface to easily manage domain names, websites, email accounts, and DNS zones.
In this article, we will guide you through the process of installing HestiaCP on Ubuntu 22.04 LTS and Debian 12.
Prerequisites
Before we begin, make sure you have the following:

A fresh Ubuntu or Debian server with a minimum of 4 GB RAM.
A valid domain name pointing to your server’s IP address.
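
Before running the installer, it is worth confirming that the domain actually resolves to the server. A quick check, with yourdomain.com as a placeholder (dig ships in the dnsutils package if it is not already installed); the address returned by dig should match one of the server’s own addresses:

dig +short yourdomain.com
ip -4 addr show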

Step 1: Install Required Packages
First, update your server’s package list and upgrade all installed packages to their latest versions.
sudo apt update
sudo apt upgrade -y

Next, set a fully qualified domain name (FQDN) as your server’s hostname and verify the hostname change.
sudo hostnamectl set-hostname yourdomain.com
hostnamectl

Next, install the required dependencies using the following command:
sudo apt install ca-certificates software-properties-common apt-transport-https gnupg wget unzip -y

Step 2: Install Hestia Control Panel
Download the latest HestiaCP installation script from the official GitHub repository using the following wget command.
wget https://raw.githubusercontent.com/hestiacp/hestiacp/release/install/hst-install.sh

Run the installation script and follow the on-screen prompts.
bash hst-install.sh

During the installation process, you’ll be prompted to confirm the installation and choose the software packages to install.
Install Hestia Control Panel
By default, Hestia installs the following:

Nginx Web/Proxy Server
Apache Web Server (as backend)
PHP-FPM Application Server
Bind DNS Server
Exim Mail Server + SpamAssassin
Dovecot POP3/IMAP Server
MariaDB Database Server
Vsftpd FTP Server
Firewall (iptables) + Fail2Ban Access Monitor.

When prompted, enter the required information:

Admin email address
FQDN hostname
MySQL root password
Confirm installation

The installation process may take some time to complete.
Hestia Installation Process
Step 3: Access Hestia Control Panel
Once the installation is complete, Hestia will provide you with the login URL, username, and password.
Hestia Installation Summary
By default, the URL will be:
https://yourdomain.com:8083
OR
https://server-ip:8083

Open this URL in your web browser. You might encounter a security warning because the SSL certificate is self-signed. Proceed by adding an exception.
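
If the page does not load, you can first confirm from the server itself that the panel is answering on port 8083; the -k flag skips validation of the self-signed certificate:

curl -k -I https://yourdomain.com:8083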
Log in using the credentials provided at the end of the installation process. You’ll be taken to the Hestia dashboard, where you can start managing your server.
Hestia Control Panel Dashboard
Step 4: Secure Your Hestia Installation
For security reasons, change the default admin password immediately by navigating to Users > Admin > Edit > Change Password.
Change Hestia Admin Password
For a more secure connection, set up SSL certificates for your domains by going to Web > Your Domain > Edit > Enable Let’s Encrypt SSL.
Enable SSL on Hestia Control Panel
Keep your Hestia Control Panel up-to-date by regularly checking for updates at Settings > Updates > Check for Updates.
Hestia Control Panel Updates
Conclusion
Hestia Control Panel simplifies the management of web servers with its user-friendly interface and robust features. By following this guide, you should have Hestia installed and configured on your Ubuntu or Debian server, ready to manage your web domains, email accounts, databases, and more.
Regular maintenance, such as updating the panel and backing up data, will ensure your server runs smoothly and securely.

brename – batch renaming safely


brename is a practical cross-platform command-line tool for safely batch renaming files/directories via regular expression.
This is free and open source software.
Features include:

Safe – it helps you check potential conflicts and errors before it’s too late.
Supports dry run – a good habit.
Supports undoing the LAST successful operation, like a time machine.
Overwrites can be detected, and users can choose whether to overwrite or leave the file (-o/--overwrite-mode).
File filtering.
Supports including (-f/--include-filters) and excluding (-F/--exclude-filters) files via regular expression.
No need to run commands like find ./ -name "*.html" -exec CMD.
Renaming submatches with corresponding values via a key-value file (-r "{kv}" -k kv.tsv).
Renaming via ascending integers (-r "{nr}").
Automatically making directories: e.g., renaming a-b-c.txt to a/b/c.txt.
Recursively renaming both files and directories.
Cross-platform support – runs under Linux, macOS, and Windows.
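
A typical session might look like the following. The flag names used here (-p for the search pattern, -r for the replacement, -d for dry run, -R for recursive) are assumptions based on the project’s documentation; verify them with brename -h before relying on them:

brename -d -p '\.jpeg$' -r '.jpg'
brename -R -p '\.jpeg$' -r '.jpg'

Run the dry-run form first, check the preview, then repeat without -d (here with -R to recurse into subdirectories).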

Website: github.com/shenwei356/brename
Support:
Developer: Wei Shen
License: MIT License

brename is written in Go. Learn Go with our recommended free books and free tutorials.
