
Configure ProFTPd for SFTP on CentOS


This is a guide on how to configure ProFTPd for SFTP sessions. SFTP (the SSH File Transfer Protocol) is a secure alternative to FTP that transfers files over the SSH protocol. ProFTPD can be reconfigured to serve SFTP sessions instead of the default FTP protocol. This guide assumes you already have an existing ProFTPD installation; if you do not, please follow How to Install Proftpd first.

Edit /etc/proftpd.conf to enable SFTP

nano /etc/proftpd.conf

Un-comment the following lines to load mod_sftp:

#LoadModule mod_sftp.c
#LoadModule mod_sftp_pam.c

so that they read:

LoadModule mod_sftp.c
LoadModule mod_sftp_pam.c

Add the following to the end of the configuration (outside of the <Global> </Global> block so it runs separately):

<IfModule mod_sftp.c>
SFTPEngine ON
SFTPLog /var/log/proftpd/sftp.log
Port 2222
SFTPHostKey /etc/ssh/ssh_host_rsa_key
SFTPCompression delayed
</IfModule>

SFTPEngine – enables SFTP
SFTPLog – sets the log file for SFTP connections
Port – sets the port ProFTPd will listen on for SFTP connections
SFTPHostKey – points to the SSH host key
SFTPCompression – sets the compression method used during transfers

Open the SFTP port in the firewall

Firewalld:

Enable the firewall rule:
firewall-cmd --zone=public --add-port=2222/tcp --permanent

Reload the firewall:
firewall-cmd --reload

Iptables:

Enable the firewall rule:
iptables -A INPUT -p tcp -m tcp --dport 2222 -j ACCEPT

Save the firewall rule:
iptables-save > /etc/sysconfig/iptables

Restart ProFTPd

CentOS 7:
systemctl restart proftpd

CentOS 6:
service proftpd restart

That's all you need to do to configure ProFTPd to accept SFTP connections. You should now be able to connect on port 2222 with an SFTP client.
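To quickly confirm that everything works, you can try an SFTP session from a client machine. This is only a sanity-check sketch; "testuser" and the server address are placeholders for an existing account and your server's IP or hostname:

sftp -P 2222 testuser@server.example.com

If you get an sftp> prompt after authenticating, ProFTPd is serving SFTP on port 2222.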
Jan 14, 2018 | LinuxAdmin.io

Popular DNS record types you can check through your Linux device


Which are the popular DNS record types?

There are a lot of different DNS record types, yet some are better known than others because they are more commonly implemented in DNS (Domain Name System) configurations. So, let's take a closer look at the most popular DNS record types:

NS record

For this record, NS stands for Name Server. Its main goal is to indicate the Authoritative DNS server for a domain name. The NS record is also one of the crucial DNS record types for achieving proper DNS configuration.

A record

This fundamental DNS record is also known as the Address record. Its purpose is especially important for every domain: the A record points a domain name to its IP address, more precisely its IPv4 address.

AAAA record

You may also find this record called the quad-A record. It is very similar to the previous one: both link domain names to their corresponding IP addresses. The main difference is that the AAAA record is used only for the newer IPv6 addresses.

SOA record

The acronym SOA stands for Start of Authority. This is the first DNS record that you should add to your DNS zone. It stores important data about the DNS administrator and also crucial information about zone transfers.

MX record

The Mail eXchanger record, or MX record for short, is another vital piece of your configuration. Its purpose is to indicate the mail server responsible for receiving email messages for your domain name. If you do not have such a record, you will have difficulties receiving emails.

PTR record

This record is also called the Pointer record. Its purpose is to point an IP address (IPv4 or IPv6) back to its corresponding domain name. It is used for Reverse DNS lookups and for validating that a given IP address actually belongs to that domain name.

CNAME record

This DNS record links one domain name to another. The CNAME record shows the actual canonical domain name, which makes it very useful for your subdomains.

How to check your DNS record types through your Linux device?

There are a lot of different ways to check and see your DNS records. However, as a Linux user, you have some outstanding options to achieve this task. Here are some great commands that you can write straight into your Terminal application.

Nslookup command

The Nslookup command is simple and easy to use. To see all of the available DNS records for your domain, type the following:

$ nslookup -type=any example.com

*Make sure to replace example.com with the domain name you want. You could also replace "any" with the specific DNS record type you want to see.
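For instance, to query only one record type (MX here, with example.com as the placeholder domain), you would run:

$ nslookup -type=mx example.com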

Dig command

The Dig command is another option for you to check and see different DNS record types. It provides detailed information.

Simply use the following pattern:

dig example.com <DNS record type>

*Make sure to replace example.com with the domain name you want and <DNS record type> with the record type you want to check.
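For example, to query just the MX records of a domain (again using example.com as a placeholder):

dig example.com MX

Appending +short to the command prints only the record values, without the extra answer sections.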

Host command 

The Host command is another easy-to-use tool with a command-line interface (CLI). When you want to see a complete list with all of the DNS records for a domain and their TTL (Time-to-live) values, type the following:

host -a example.com

As a result, you will see records such as A, AAAA, CNAME, and MX, along with their TTL values.

*Make sure to replace example.com with the domain name you want.
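Like dig, the host command can also be limited to a single record type with the -t flag, for example:

host -t mx example.com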

Linux 6.11-rc1 Released With Initial Intel Battlemage Support, AMD RDNA4 Primed

Linux 6.11-rc1

The Linux 6.11 merge window is over with the Linux 6.11-rc1 release now out the door.
Linux 6.11 is bringing many new features across the board. There is initial support for some Qualcomm Snapdragon X1 laptops, a lot of CPU and GPU additions, other hardware support work, and a fair amount of kernel features added.

On the CPU side there is RISC-V NUMA support for ACPI-based systems, some small performance gains at least for Threadripper Zen 4, performance event improvements for more Intel CPUs, AES-GCM optimized versions for AVX-512/AVX10 and VAES, AMD Core Performance Boost and Fast CPPC were added to the AMD P-State driver, and AMD SEV-SNP KVM guest support is finally mainlined.
On the graphics side, the initial cut at Intel Xe2 Battlemage discrete graphics cards has landed, including the display functionality and initial device IDs. But more work on Battlemage will continue in the cycles ahead. AMD RDNA4 (GFX12) graphics cards also appear to be in preliminary good shape with Linux 6.11.
Other changes include UBIFS being hardened against power cuts, a minimum Rust toolchain version is now defined, getrandom() in the vDSO, a nice EXT4 performance optimization, the upstream kernel can now easily build a Pacman kernel package for Arch Linux systems, a new power sequencing subsystem, and more. I’ll have out my Linux 6.11 kernel feature overview in the coming days.
Stay tuned for my more extensive Linux 6.11 feature overview and the start of more Linux 6.11 kernel performance benchmarking.
As of writing, Linus Torvalds has yet to put out a formal announcement of Linux 6.11-rc1 on the mailing list, but the first release candidate can be downloaded via Linux Git for those interested in getting to testing right away.

Update: Linus Torvalds has now posted his v6.11-rc1 announcement: "The merge window felt pretty normal, and the stats all look pretty normal too. I was expecting things to be quieter because of summer vacations, but that (still) doesn't actually seem to have been the case."
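For those who want to fetch the release candidate straight from Git, a shallow clone of the v6.11-rc1 tag from the mainline kernel.org tree is the quickest route (a sketch; adjust the tag or repository to taste):

git clone --depth 1 --branch v6.11-rc1 https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git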

Saturday's Talks: can a list of the 6 worst ideas in computer security written in 2005 still be considered valid today?

CyberSecurity

Evolution is certainly one of the essential traits of computer science. Even just looking at the last 10 years of computing history, you can see how the very concept of a workload has been completely overturned. There has been a clear transition from company-owned datacenters full of servers virtualizing machines to cloud environments that deliver applications in a scalable way through containers.

And this is only one of the many examples that could be given; we could go back even further: how many people started using Linux by installing it from floppy disks that had to be mounted by hand?

In short, however complicated (though certainly not impossible) it may be to make predictions about the future, we can still observe the past, for example to understand what is worth keeping and what should be discarded. Especially when it comes to security.

That is why, after coming across this 2005 article written by the user mjr, better known as Marcus Ranum, titled The Six Dumbest Ideas in Computer Security, I asked myself whether, even though almost 20 years have passed since that article was written, the ideas it contains are still relevant.

To answer that, let's start with the list itself:

Default Permit: in its most practical form, opening the firewall to everything and then closing only what you care about.

Enumerating Badness: which can be summed up as protecting yourself against specific vulnerabilities without actually adopting an overall, general approach to security.

Penetrate and Patch: where you essentially hand your system over to an expert (sometimes even with the access keys, because those are needed) and then, based on the findings, apply countermeasures.

Hacking is Cool: feeding the idea that finding problems (and maybe putting entire organizations in difficulty) is cool.

Educating Users: citing the "Anna Kournikova" worm, the famous virus that in 2001 promised nude photos of the then-famous tennis player in exchange for a click on an .exe, here is the reflection that educating users means having to "patch" users every week, a process that is stupid as well as impossible.

Action is better than inaction: which quotes Sun Tzu, according to whom "it is often easier not to do something stupid than to do something smart".

The computing landscape in which this article was written is clearly very different from today's, sure, but not that different when you think about it. I don't think anyone would object to points one through three, as valid then as they are now; likewise, once you strip away all the romanticism that point four carries with it, you realize that the "hacking is cool" mentioned here really is a bad idea to pass on.

On point 5, it is interesting to note this prediction from the article:

My prediction is that in 10 years users that need education will be out of the high-tech workforce entirely, or will be self-training at home in order to stay competitive in the job market. My guess is that this will extend to knowing not to open weird attachments from strangers.


This is interesting because it has not come true, and indeed is far from doing so, in a context where anyone with a smartphone feels entitled to call themselves an IT expert, and where email attachments are still among the leading causes of malware and virus infections today.

Finally, point 6 is perhaps the only one where a clear discrepancy with today's accepted good practice is really worth noting: common sense aside, it has been shown that the proactive shift-left approach is essentially indispensable for bringing software production pipelines genuinely close to the concept of security.

And speaking of that, and of how shift-left is actually far from being a universally adopted practice, one thing has certainly not changed from 2005 to today: Windows blue screens.

CrowdStrike docet.

Raoul Scarazzini

A lifelong enthusiast of the open-source world and of Linux, in 2009 I founded the portal Mia Mamma Usa Linux! to share articles, news and, in general, everything concerning the world of the penguin, with particular attention to interoperability, HA and cloud topics. And yes, my mom has been using Linux since 2009.

Source: https://www.miamammausalinux.org/2024/07/saturdays-talks-una-lista-delle-sei-peggiori-idee-in-ambito-sicurezza-informatica-scritta-nel-2005-puo-essere-considerata-valida-oggi/

Understanding RAID 0, 1, 5, 6, and 10

Sohail

RAID, which stands for Redundant Array of Independent (or Inexpensive) Disks, is a technology that can enhance data storage by combining multiple physical disks into a single logical unit. In this article, we will learn about RAID, explore its various levels, and discuss how it can benefit personal and enterprise storage solutions.

Understanding RAID is essential for optimizing storage performance and data redundancy. Whether you are a programmer, artist, or business owner with valuable data to protect, RAID can offer solutions to ensure data integrity and availability in case of disk failures.

Introduction to RAID

RAID offers two primary benefits: improved performance and data redundancy. By spreading data across multiple disks in an array, RAID can enhance read/write operations and provide fault tolerance. There are several RAID levels, each designed to meet specific use cases and requirements.

Exploring Different RAID Levels

RAID 0

Known for its increased performance and capacity, RAID 0 distributes data evenly across disks but provides no redundancy. It is ideal for non-critical applications requiring high speed.

RAID 1

This level duplicates data across disks to provide redundancy, offering fault tolerance and data protection. However, the usable disk space is limited to the size of the smallest disk in the array.

RAID 5

Distributing data and parity information across disks, RAID 5 offers a balance of performance, capacity, and data redundancy. Although the equivalent of one disk is used for parity information, it provides fault tolerance against a single disk failure.

RAID 6

Similar to RAID 5 but with dual distributed parity, RAID 6 provides additional fault tolerance by withstanding the failure of two disks in the array.

RAID 10

Combining mirroring and striping, RAID 10 offers both redundancy and performance by striping data across sets of mirrored disks. It is ideal for high-performance requirements but needs a larger number of disks.

Setting Up and Managing RAID Arrays

Creating and managing RAID arrays involves checking the connected disks, using the mdadm command for array creation, and understanding the rebuilding process. Regular monitoring and maintenance are crucial for ensuring the health and integrity of the array.

Most Linux distributions come with the mdadm utility pre-installed, but if your distro ships without it, you can use the package manager to pull it from the repository.

sudo apt install mdadm

Creating a RAID array

Before creating a RAID array, determine the appropriate RAID level based on your requirements (e.g., RAID 0 for performance, RAID 1 for redundancy, RAID 5 for a balance of both, and so on). Note that RAID 5 requires at least three member disks.

sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdX /dev/sdY /dev/sdZ

/dev/md0 : Name of the array (for example, /dev/md0, /dev/md1, /dev/md2, …)
--level : RAID level to be used for the array
--raid-devices : Number of disks to be used in the array

After the array creation completes, it provides us with a logical disk, /dev/md0. To use the disk, we need to initialize the RAID array by formatting it with a filesystem of your choice (e.g., mkfs.ext4).

sudo mkfs.ext4 /dev/md0

That's it. The disk is now ready to be mounted anywhere. For example, we can create a directory /media/raid and mount the logical disk there.

sudo mount /dev/md0 /media/raid

Best Practices and Considerations

While RAID is a valuable tool for storage solutions, it is not a substitute for regular backups. It is crucial to maintain a separate backup solution to safeguard against data corruption or accidental deletion. Firmware and driver updates, quick disk replacements, and monitoring are also essential for maintaining data integrity in RAID setups.

If you set up RAID 5 (which can tolerate one disk failure) and a disk fails, you need to replace the failed disk with a new one quickly. If a second disk fails in the meantime, the entire array stops working and the data becomes extremely difficult to recover and reconstruct, since the data and parity blocks are lost.

To check whether every disk is working and the array is healthy, we can use the following command:

sudo mdadm --detail /dev/md0

(Screenshot: mdadm --detail output showing the array state and member disks)

As highlighted in the screenshot, the command shows the overall health of the array. When there are no problems, the array State is clean. As soon as any disk failure occurs, the state becomes degraded and the failed disk(s) are marked as faulty.

Once a disk is marked as faulty or failed, you can physically remove it from the system and connect a new disk. The new disk can be listed using the lsblk command. Create a filesystem on it using the following command:

sudo mkfs.ext4 /dev/sde

#/dev/sde is the new device

Once the filesystem is created, the disk is ready to be added to the RAID array using the following command:

sudo mdadm --manage /dev/md0 --add /dev/sde

And that's it. RAID will automatically start the rebuilding process and reconstruct all the lost data on the new disk. Meanwhile, the array will continue to function in a degraded state. Once the rebuilding is complete, the array will return to a clean state.
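One extra step worth mentioning, sketched here under the assumption of a Debian/Ubuntu-style layout (matching the apt-based install above): record the array in mdadm.conf and in /etc/fstab so it is assembled and mounted automatically after a reboot, and use /proc/mdstat to watch a rebuild in progress.

sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
echo '/dev/md0 /media/raid ext4 defaults 0 2' | sudo tee -a /etc/fstab
cat /proc/mdstat

The first two commands save the array definition and refresh the initramfs so the array is assembled at boot; the fstab entry reuses the mount point from the example above; /proc/mdstat shows the resync/rebuild progress while a replacement disk is being reconstructed.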
Conclusion

In conclusion, RAID plays an important role in data management strategies, offering a blend of performance, redundancy, and scalability. Whether for personal storage needs or enterprise-level operations, understanding RAID and selecting the appropriate level based on your requirements is key to data integrity and availability.

Thank you for reading this article. If you have any questions, please let me know in the comment section below.

Kali Linux 2024.1 Release (Micro Mirror)



Hello 2024! Today we are unveiling Kali Linux 2024.1. As this is our first release of the year, it does include new visual elements! Along with this we also have some exciting new mirrors to talk about, and of course some package changes – both new tools and upgrades to existing ones. If you want to see the new theme for yourself and maybe try out one of those new mirrors, download a new image or upgrade if you have an existing Kali Linux installation.

The summary of the changelog since the 2023.4 release from December is below.

Introducing the Micro Mirror Free Software CDN

With this latest release of Kali Linux, our network of community mirrors grew much stronger, thanks to the help of the Micro Mirror CDN! Here's the story.

Last month we replied to a long-forgotten email from Kenneth Finnegan from the FCIX Software Mirror. The FCIX is a rather big mirror located in California, and they reached out to offer to host the Kali images on their mirror. To which we answered yes please, and that was it; shortly after, the Kali images were added to the FCIX mirror. So far so good, and it could have been the end of the story, but then Kenneth followed up:

"We're now also operating another 32 other mirrors which are optimized for minimal storage and hosting only the highest traffic projects […] Would the Kali project be willing to accept ten additional mirrors from the FCIX organization?"

Wow, 10 additional mirrors, that sounds very nice indeed! But, wait, 32 mirrors? How come? Where do all those mirrors come from? That was intriguing. As it turns out, Kenneth operates a network of mirrors, which was officially announced back in May 2023 on his blog: Building the Micro Mirror Free Software CDN. For anyone interested in Internet infrastructure, we encourage you to read it; that's a well-written blog post right there, waiting for you.

So what is the Micro Mirror CDN exactly? One-liner: a network of mirrors dedicated to serving Linux and Free Software. Contrary to traditional mirrors that host around 50TB of project files, Micro Mirrors are machines with "only" a few TB of storage that focus on hosting only the most high-demand projects. In other words: they provide additional bandwidth where it's needed the most. Another important difference with traditional mirrors is that those machines are not managed by the sponsor (the organization that funds the mirror). Usually, a sponsor provides the bandwidth, the mirror, and also administrates it. Here, the sponsor only provides the bandwidth, and it's the FCIX Micro Mirror team that does everything else: buy the hardware, ship it to the data-center, and then manage it remotely via their public Ansible playbook.

For anyone familiar with mirroring, it's quite exciting to see such a project taking shape. Free software and Linux distributions have been distributed thanks to community-supported mirrors for almost three decades now; it's a long tradition. It's true that we've seen some changes over the last years, and these days some of the biggest FOSS projects are entirely distributed via a CDN, leaving behind the mirroring system. For Kali Linux we use a mixed approach: it is distributed in part thanks to 50+ mirrors across the world, and in part thanks to the Cloudflare CDN that acts as a ubiquitous mirror. We are lucky to benefit from a very generous sponsorship from Cloudflare since 2019. But smaller or newer projects don't get this chance, thus community mirrors are still essential to free software distribution.
That's why it's nice to see a project like the Micro Mirror CDN: it's a novel approach in the field of mirroring, and with Kali Linux we are very grateful to be part of the journey.

For any organization out there that has spare bandwidth and wants to support free software, the Micro Mirror project might be something you are interested in. You might want to look at their product brief for a more thorough description of the service, and email mirror at fcix dot net for more information. We'll just quote one line that summarizes it really well:

"From the hosting sponsor's perspective, the Micro Mirror is a turnkey appliance, where they only need to provide network connectivity and remote hands to install the hardware, where all sysadmin and monitor work is handled by the FCIX team with the economy of scale on our side."

A big thanks to the FCIX team, and Kenneth Finnegan in particular, for their generous offer. Thanks to their help, the Kali images are now served from ten additional mirrors: seven in the US, one in Colombia, one in the UK and one in Australia.

And while we are talking about mirrors: we also got plenty of new mirrors from various sponsors during this release cycle; check the dedicated section below for details.

2024 Theme Refresh

As with previous 20**.1 releases, this update brings with it our annual theme refresh, a tradition that keeps our interface as cutting-edge as our tools. This year marks the unveiling of our newest theme, meticulously crafted to enhance the user experience from the moment you boot up, with significant updates to the boot menu, the login display, and an array of captivating desktop wallpapers, for both our regular Kali and Kali Purple editions. We are dedicated to not only advancing our cybersecurity capabilities but also ensuring that the aesthetic appeal of our platform matches the power within.

Boot menu:
Login display:
Desktop:
Kali-Purple desktop:
New wallpapers:

Special thanks to @arszilla for not only suggesting two wallpaper variants but also contributing to the creation of one of the default wallpapers featured in this release. These additional images were crafted to complement the background colors of the Nord and Dracula color schemes. To access these wallpapers, simply install the kali-community-wallpapers package, which also offers many other stunning backgrounds created by our community contributors.

Other desktop changes

Xfce

We are excited to introduce a convenient enhancement to our Xfce desktop. Now, users can effortlessly copy their VPN IP address to the clipboard with just a click, simplifying the workflow and enhancing productivity. To take advantage of this functionality, ensure that xclip is installed on your system (sudo apt update && sudo apt -y install xclip). With this improvement, managing your VPN connections on Kali Linux becomes even more seamless and intuitive.

Thank you @lucas.parsy for your contribution that made this feature possible!

Other Xfce changes:

Kali-undercover updated to fix compatibility with the latest Xfce
Fixed a bug with xfce-panel and Kali's customized cpugraph plug-in

Gnome-Shell

For the Gnome desktop, one notable change is the replacement of the eye-of-gnome (eog) image viewer with Loupe, continuing the transition to GTK4-based applications.
Additionally, the latest update of the Nautilus file manager arrived in Kali's repositories, delivering a significant boost in file search speed and introducing a refreshed sidebar design.

Icon Theme

Following the desktop enhancements, we've added a few new app icons, ensuring a fully themed experience for default installations of Kali Linux. Additionally, we've refreshed our icon theme with new symbolic icons, enhancing consistency system-wide.

Kali NetHunter Updates

We finally got our hands on a brand new Samsung Galaxy S24 Ultra and yes, NetHunter rootless runs like a dream. Fortunately, Android 14 lets us disable child process restrictions in the developer settings, so we no longer have to use the adb command line to enable KeX support. We have updated our documentation to reflect these changes.

@yesimxev managed to add the popular Bad Bluetooth HID attack to the NetHunter app for both phones and even smartwatches!

The icons for our NetHunter and NHTerm apps have received a makeover, and @kimocoder & @martinvlba spent countless days updating the codebase to ensure compatibility with the latest Android version.

Community engagement is at an all-time high, which is reflected by the following new kernels:

Realme C15
TicWatch Pro 3
(Updated) Samsung Galaxy S9+
Xiaomi Poco X3 NFC

Thanks heaps to everyone that contributed, we wouldn't be here without you! Stay tuned, as there are many more kernels already on the way!

New Tools in Kali

The following new tools made it into this Kali release (via the network repositories):

blue-hydra – Bluetooth device discovery service
opentaxii – TAXII server implementation from EclecticIQ
readpe – Command-line tools to manipulate Windows PE files
snort – Flexible Network Intrusion Detection System

The focus this release was on adding new libraries, and there are, as always, numerous package updates. Plus, we also bumped the Kali kernel to 6.6!

There has also been a tool submitted by the community which has been merged into Kali:

above – Invisible protocol sniffer for finding vulnerabilities in the network

If you want a tool in Kali sooner than we can add it, please see our blog post from a previous release.

Miscellaneous

Below are a few other things which have been updated in Kali, which we are calling out but which do not need as much detail:

Due to the ongoing /usr-merge transition in Debian, using 2023.4 or older versions of our netboot images will no longer work. Make sure to either grab a weekly image or Kali 2024.1!
Friendly reminder: if you are getting "weird special characters" when trying to use keyboard shortcuts to copy/paste from the clipboard, the default is to use "ctrl+shift+c" and "ctrl+shift+v". ctrl+c (without shift) in Unix is used to kill programs! Should you wish, you can alter the default behaviour in your favourite terminal program.

Kali Website Updates

Kali Documentation

Our Kali documentation has had various updates. A way to make a project even stronger is to help its documentation, and Kali is no exception; if you are able to, please do contribute.

Tool Documentation

Our tool documentation is always getting various updates from us, but we also received a great contribution from Daniel. If you want to help Kali and give back, submitting to kali.org/tools is a great way to contribute.

Kali Blog Recap

Since our last release, we did the following blog posts:

These are people from the public who have helped Kali and the team for the last release, and we want to praise them for their work (we like to give credit where due!). Anyone can help out, anyone can get involved!

New Kali Mirrors

We have some new mirrors! Plenty of new mirrors, in fact. The last quarter was quite incredible on this front, and now is the time to give credit.

Let's start with North America:

Now for the rest of the world:

On top of that, as said above, there is now the Micro Mirror CDN that serves Kali images via 10 points of presence: 7 in the US, 1 in Colombia, 1 in the UK and 1 in Australia!

To wrap that up: THANK YOU to all of you, individuals and companies, who provide bandwidth and help us distribute Kali to everyone out there! If you have the disk space and bandwidth, we always welcome new mirrors.

Kali Team Discord Chat

Since the launch of our Discord server with Kali 2022.3, we have been doing an hour-long voice chat with a number of Kali team members.
This is when anyone can ask questions (hopefully relating to Kali or the information security industry). The next session will happen a little later than normal: Friday, 22nd March 2024, 18:00 -> 19:00 UTC/+0 GMT. It will once again be on OffSec's Discord. Please note, we will not be making a recording of this event – it's live only.

Get Kali Linux 2024.1

Fresh Images:

So what are you waiting for? Go get Kali already! Seasoned Kali Linux users are already aware of this, but for the ones who are not, we also produce weekly builds that you can use. If you cannot wait for our next release and you want the latest packages (or bug fixes) when you download the image, you can just use the weekly image instead. This way you will have fewer updates to do. Just know that these are automated builds that we do not QA like we do our standard release images. But we gladly take bug reports about those images, because we want any issues to be fixed before our next release!

Existing Installs:

If you already have an existing Kali Linux installation, remember you can always do a quick update:
If you already have an existing Kali Linux installation, remember you can always do a quick update:┌──(kali㉿kali)-[~]
└─$ echo "deb http://http.kali.org/kali kali-rolling main contrib non-free non-free-firmware" | sudo tee /etc/apt/sources.list
[…]

┌──(kali㉿kali)-[~]
└─$ sudo apt update && sudo apt -y full-upgrade
[…]

┌──(kali㉿kali)-[~]
└─$ cp -vrbi /etc/skel/. ~/
[…]

┌──(kali㉿kali)-[~]
└─$ [ -f /var/run/reboot-required ] && sudo reboot -f
You should now be on Kali Linux 2024.1. We can do a quick check by doing:
┌──(kali㉿kali)-[~]
└─$ grep VERSION /etc/os-release
VERSION="2024.1"
VERSION_ID="2024.1"
VERSION_CODENAME="kali-rolling"

┌──(kali㉿kali)-[~]
└─$ uname -v
#1 SMP PREEMPT_DYNAMIC Kali 6.6.9-1kali1 (2024-01-08)

┌──(kali㉿kali)-[~]
└─$ uname -r
6.6.9-amd64
NOTE: The output of uname -r may be different depending on the system architecture.

As always, should you come across any bugs in Kali, please submit a report on our bug tracker. We will never be able to fix what we do not know is broken! And social networks are not bug trackers!

Want to keep up-to-date more easily? Automate it! We have an RSS feed and a newsletter for our blog to help you.

How to create HostPath persistent volume in Kubernetes


This article will guide you about how to create HostPath persistent volume in Kubernetes.

You might know that data in a Pod exists only for the lifetime of the Pod. If the Pod dies, all the data that belongs to it goes away along with the Pod. So if you want to persist your data beyond the life cycle of the Pod, you need something called a Persistent Volume in Kubernetes.
So let's study how to create a HostPath persistent volume, which is very easy to experiment with and a good way to learn the fundamentals of Persistent Volumes.
The following Persistent Volume types are available for use within Kubernetes, provided by different vendors:

GCEPersistentDisk
AWSElasticBlockStore
AzureFile
AzureDisk
CSI
FC (Fibre Channel)
FlexVolume
Flocker
NFS
iSCSI
RBD (Ceph Block Device)
CephFS
Cinder (OpenStack block storage)
Glusterfs
VsphereVolume
Quobyte Volumes
HostPath (Single node testing only — local storage is not supported in any way and WILL NOT WORK in a multi-node cluster)
Portworx Volumes
ScaleIO Volumes
StorageOS

As you can see, HostPath should be used only for testing purposes, and it does not support multi-node clusters. In case you want to explore more about Persistent Volumes, you may follow this link.
The basic process for Persistent Volumes is as follows:

The K8s admin creates the persistent volume in the cluster.
The user claims it with a Persistent Volume Claim; once it is claimed, its status becomes "Bound".
The Pod then uses that volume for storing data, which persists across the life cycle of the Pod.

Enough of the theory; let's jump into the technical steps:

Create the persistent volume

In this step we use the following manifest YAML file:
# cat hostpath-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-hostpath
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp/kube"

As shown in the above definition file, the volume size is 1Gi and the path is "/tmp/kube". Let's create the PV as below:
# kubectl create -f hostpath-pv.yaml
persistentvolume/pv-hostpath created

Recheck the PV and persistent volume claims using the command below:
# kubectl get pv,pvc -o wide
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE VOLUMEMODE
persistentvolume/pv-hostpath 1Gi RWO Retain Available manual 6s Filesystem

As you can see, the PV is created with status Available, and since we haven't specified a reclaim policy, the default "Retain" applies, meaning that even if the PVC (Persistent Volume Claim) gets deleted, the PV and its data won't be deleted automatically. We will test that out as well in a bit.

Create the Persistent volume claim

In order to use the PV, we need to create a Persistent Volume Claim (PVC). Here is the manifest YAML file for it:
# cat pvc-hostpath.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-hostpath
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi

Kindly note in the above definition that the claim is only for 100Mi (which is within the 1Gi capacity of the PV), and the access mode is "ReadWriteOnce", the same as that of the PV. Hence we are able to create the PVC as below:
# kubectl create -f pvc-hostpath.yaml
persistentvolumeclaim/pvc-hostpath created

Check the status of pv and pvc.
# kubectl get pv,pvc -o wide
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE VOLUMEMODE
persistentvolume/pv-hostpath 1Gi RWO Retain Bound default/pvc-hostpath manual 20s Filesystem

NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE VOLUMEMODE
persistentvolumeclaim/pvc-hostpath Bound pv-hostpath 1Gi RWO manual 4s Filesystem

You will see that the status of the PV has changed from Available to Bound.

Create the Pod to utilize this PV as a mount point inside it.

# cat busybox-pv-hostpath.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  volumes:
    - name: host-volume
      persistentVolumeClaim:
        claimName: pvc-hostpath
  containers:
    - image: busybox
      name: busybox
      command: ["/bin/sh"]
      args: ["-c", "sleep 600"]
      volumeMounts:
        - name: host-volume
          mountPath: /tmp/mydata

As described in the Pod definition file, it will create the mount point /tmp/mydata inside the Pod. Let's create the Pod using the above definition file.
# kubectl create -f busybox-pv-hostpath.yaml
pod/busybox created

Check the status and inspect the Pod:
# kubectl get all -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/busybox 1/1 Running 0 2m4s 10.244.1.114 kworker01 <none> <none>

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 35d <none>

# kubectl describe pod busybox
Name: busybox
Namespace: default
Priority: 0
Node: kworker01/10.253.121.32
Start Time: Mon, 06 Jul 2020 02:43:16 -0400
Labels: <none>
Annotations: <none>
Status: Running
IP: 10.244.1.114
IPs:
IP: 10.244.1.114
Containers:
busybox:
Container ID: docker://6d1cfa9b6440efe2770244d1edc6a78c0dd7649bbf905121e70a013ad3b1dd1e
Image: busybox
Image ID: docker-pullable://busybox@sha256:9ddee63a712cea977267342e8750ecbc60d3aab25f04ceacfa795e6fce341793
Port: <none>
Host Port: <none>
Command:
/bin/sh
Args:
-c
sleep 600
State: Running
Started: Mon, 06 Jul 2020 02:43:25 -0400
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/tmp/mydata from host-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-49xz2 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
host-volume:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: pvc-hostpath
ReadOnly: false
default-token-49xz2:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-49xz2
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/busybox to kworker01
Normal Pulling 64s kubelet, kworker01 Pulling image "busybox"
Normal Pulled 58s kubelet, kworker01 Successfully pulled image "busybox"
Normal Created 58s kubelet, kworker01 Created container busybox
Normal Started 57s kubelet, kworker01 Started container busybox

In the describe output you can see that the /tmp/mydata volume was created using host-volume from the claim pvc-hostpath. Also, the Pod was scheduled and created on the node "kworker01".
Let's log in to the Pod and create a sample file, in order to demonstrate that the data outlives the Pod.
# kubectl exec -it busybox -- sh
/ # hostname
busybox
/ # cd /tmp/
/tmp # ls
mydata
/tmp # cd mydata/
/tmp/mydata # echo "hello from K8S" > Hello.txt
/tmp/mydata # ls -ltr
total 4
-rw-r--r-- 1 root root 15 Jul 6 06:46 Hello.txt
/tmp/mydata #

In the above demo we created the file "Hello.txt" inside /tmp/mydata. Now let's delete the Pod.
# kubectl delete pod busybox
pod "busybox" deleted
root@vbhost:~/kubernetes/yamls# kubectl get all
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 35d

The Pod was deleted successfully. Let's log in to the node "kworker01", where the Pod was scheduled earlier, to check whether the data still persists after the deletion of the Pod.
sh-4.2# hostname
kworker01
sh-4.2# cd /tmp
sh-4.2# ls
kube
sh-4.2# cd kube/
sh-4.2# ls
Hello.txt
sh-4.2# cat Hello.txt
hello from K8S
sh-4.2# exit

You can see that our file "Hello.txt" still exists on the node even though the Pod is gone.
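As a quick follow-up on the Retain reclaim policy mentioned earlier, here is a minimal sketch (object names match the ones created above) of what happens if you also delete the claim:

# kubectl delete pvc pvc-hostpath
# kubectl get pv pv-hostpath

With the Retain policy, the PV is not removed; its status changes from Bound to Released, and the files under /tmp/kube on the node remain until you clean them up manually.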
So this is all about how to create a HostPath persistent volume in Kubernetes.
 

Top Linux Interview Questions – The Wandering Irishman


So you want to ace that interview for a Linux position in a company and want to know what would be the interview questions you really need to know?
Let’s get into it. We’ll start with a few harder ones and ease you into the easy ones 😉
How do you check for free disk space?
df -ah

df is your friend. In an interview you will be expected to talk a little about what the listed filesystems are for and how they are taking up space, so do some extra research on those.
How can you see the kernel version of the machine?
uname -a

You can also use the -v flag for just the version or -r for the release.
How do you start a service on a Linux System?
You will need to use the service command followed by the service you want to start, for instance, to start up Apache2 server.
service apache2 start

You can also use status to see the status of the service or stop to kill the process.
How do you check the IP of a Linux system?
ifconfig

You can get a lot more details if you specify the interface you want, for instance if you wanted to only see interface eth0.
ifconfig eth0
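Bonus points in interviews: on many modern distributions ifconfig is deprecated or not installed by default, so the same information is obtained with the ip tool from iproute2 (the interface name here is just an example):

ip addr show eth0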
How do you check for open ports on Linux?
netstat

You can refine the output to show only listening TCP and UDP sockets, along with port numbers and the owning process:
netstat -tulpn
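Similarly, netstat is gradually being replaced by ss from iproute2, which accepts the same style of flags, so mentioning it can set you apart:

ss -tulpn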

What command would you use to print the working directory?
This one comes up a lot to try to catch you out, but the answer is in the question.
[P]rint  [W]orking  [D]irectory.
pwd

Let’s say you have an Apache web server running on Linux, where would you find the server’s index.html file?
Easy, but good to know.
cd /var/www/html

How, as a root user, do you give a file full permissions to read, write and execute?
Let’s create a file called hello.py for this example.
sudo nano hello.py

print("hello world!")

cat hello.py

So we have the file called hello.py that we just created.
We can see it listed in white, which means it is not yet executable by anyone. So let's change that.
sudo chmod 777 hello.py

Now we can see the file is green and ready to execute. You could also grant the same permissions using symbolic notation with the following command.
sudo chmod a+rwx hello.py

How do you see all of the running processes on Linux?
Most people will use ps.

But if you are a real Linux pro the following command is what will set you apart from other candidates. And it’s an easy one.
top

I really hope this helps you in that all important interview and that you nail it.
Please leave a like or a comment and of course follow the blog for more great posts.
Thanks for reading.
We do this for free so please consider contributing to us below. It really helps.
 

QuBits 2020-07-10
 

How to add a GitHub connection from an AWS account?


Published: February 5, 2024 | Modified: February 5, 2024

In this blog post, we will guide you through a step-by-step process to establish a GitHub connection in an AWS account.

Creating GitHub Connection for AWS

What is a connection?

Firstly, let’s understand the concept of a connection in the AWS world. In AWS, a connection refers to a resource that is used for linking third-party source repositories to various AWS services. AWS provides a range of Developer tools, and when integration is required with third-party source repositories such as GitHub, GitLab, etc., the connection serves as a means to achieve this.

Adding a connection to connect GitHub with AWS

Let's dive into the step-by-step procedure to add a connection that allows your AWS account to talk to your personal GitHub repositories.

AWS Developer Tools Connection console

On the wizard screen, select GitHub and name your connection.

Click on the Connect to GitHub button.

Create Connection wizard

Now, AWS will try to connect to GitHub and access your account. Ensure you are already logged into GitHub, and you should see the authorization screen below. If not, you will need to log in to GitHub first.

Authorize AWS connector for GitHub

You can review the permissions being granted to AWS on your account by clicking the Learn more link on this screen.

Click on Authorize AWS Connector for GitHub

After authorizing the AWS connector, you should be back to the GitHub connection settings page.

At this point, AWS requires GitHub App details that will allow Amazon to access your GitHub repositories and make modifications to them.

AWS also offers to create a GitHub app on your behalf if it’s not created already. You can use the Install a new app button here to let AWS create the GitHub app in your account.

In that case, you need to verify the configuration (repo selection) and then click the Install button.

Installing AWS Connector GitHub App

Once the app is created, the GitHub App ID will be populated in the wizard; if the app was already created, enter the ID manually.

GitHub Apps details for creating a connection

Click on the Connect button.

You should be greeted with a success message with the new connection created!

GitHub Connection is created!

Your GitHub connection is now ready. You can use this connection in compatible AWS services and let those services access your GitHub repositories.
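If you prefer scripting over the console, a connection can also be created with the AWS CLI. Treat this as a sketch (the connection name and region are placeholders), and note that a CLI-created connection starts in the PENDING state, so the GitHub handshake described above still has to be completed once in the console:

aws codestar-connections create-connection \
    --provider-type GitHub \
    --connection-name my-github-connection \
    --region us-east-1

The command returns the connection ARN, which you can reference from services such as CodePipeline once the connection shows as Available.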

CloudLinux Announces Support for Virtuozzo and OpenVZ Containers


CloudLinux has an ongoing commitment to supporting the diverse virtualization needs of its clients. In a significant update, CloudLinux is now officially providing support for Virtuozzo and OpenVZ containers, reversing a previous decision to limit support to hypervisors only. This change, driven by customer demand, marks a new chapter in the flexibility and functionality of CloudLinux OS.

Historical Context
Back in 2019, CloudLinux announced that it would discontinue support for Virtuozzo and OpenVZ containers, focusing instead on supporting only the hypervisor aspects of Virtuozzo and OpenVZ, along with other hypervisors such as Xen, KVM, and VMware. This decision was outlined in a blog post which emphasized the move to support hypervisors only, leaving container support aside. This led to numerous inquiries from clients regarding the continued support for containers.
 
Reintroduction of Container Support
In response to the increasing demand from customers who operate containerized environments, CloudLinux revisited its strategy in 2022. Recognizing the necessity for CloudLinux OS to function within containerized environments, the company decided to extend support to Virtuozzo and OpenVZ containers. This support was initially implied within the context of the CloudLinux Solo and CloudLinux Admin releases, but support for Virtuozzo and OpenVZ containers was not explicitly communicated until now.
For detailed information on installing CloudLinux in these environments, please refer to the CloudLinux installation documentation.
 
Enhanced CloudLinux Offerings
The reintroduction of container support comes with specific features and limitations. Notably, CloudLinux working inside containers will not support LVE (Lightweight Virtual Environment) features due to inherent restrictions. However, this limitation is mitigated by the robust features offered in CloudLinux Solo and CloudLinux Admin, which prioritize website performance and security without the need for LVE limiting.
 
Key features included in CloudLinux Solo and Admin are:

AccelerateWP: Enhances WordPress performance with tailored optimization.
PHP X-Ray: Provides in-depth PHP performance analysis.
Mod_lsapi: Delivers improved PHP performance with LiteSpeed’s API.
Website Monitoring Tool: Continuously tracks website uptime and performance.
Slow Site Analyzer and Autotracing: Identifies and troubleshoots slow-performing sites.
PHP Selector + Hardened PHP: Allows users to select the PHP version while ensuring security.
Node.js/Python/Ruby Selector: Supports various development environments.

 
In addition, security features such as CageFS and symlink protection (the latter exclusive to CloudLinux Admin) enhance the security posture of web hosting environments.
 
Seamless Edition Switching 
A notable advancement in 2024 is the ability to switch seamlessly between different CloudLinux editions without needing to reinstall the system. This flexibility ensures that users can adapt their CloudLinux installations to evolving needs without disruptive transitions. This mechanism is also available in Virtuozzo and OpenVZ environments, providing the same seamless switching capabilities. For more details on this capability, see the blog post on seamless switching.
CloudLinux’s renewed support for Virtuozzo and OpenVZ containers underscores its dedication to meeting the dynamic needs of its clients, ensuring that both containerized and hypervisor-based virtualization environments are well-supported.