
Sparky 7.4 – SparkyLinux

SparkyLinux

The 4th update of Sparky 7 – 7.4 is out.
It is a quarterly updated point release of Sparky 7 “Orion Belt” of the stable line. Sparky 7 is based on and fully compatible with Debian 12 “Bookworm”.
Changes:
– all packages updated from Debian and Sparky stable repos as of June 9, 2024
– Linux kernel PC: 6.1.90 LTS (6.9.4, 6.6.32-LTS, 6.1.92-LTS, 5.15.160-LTS in sparky repos)
– Linux kernel ARM: 6.6.31 LTS
– LibreOffice 7.4.7
– KDE Plasma 5.27.5
– LXQt 1.2.0
– MATE 1.26
– Xfce 4.18
– Openbox 3.6.1
– Firefox 115.11.0esr (127.0 & 115.12.0esr in sparky repos)
– Thunderbird 115.11.0
– added Debian backports Linux kernels installation to APTus: 64bit, 64bit RT, 686-pae, 686-pae RT, 686 non-pae
Sparky 7.4 “Orion Belt” is available in the following versions:
– amd64 BIOS/UEFI+Secure Boot: Xfce, LXQt, MATE, KDE Plasma, MinimalGUI (Openbox) & MinimalCLI (text mode)
– i686 non-pae BIOS/UEFI (Legacy): MinimalGUI (Openbox) & MinimalCLI (text mode)
– ARMHF & ARM64: Openbox & CLI
Due to an error detected at the last moment, the Xfce and MATE ISOs have been reconfigured and recreated as 7.4.1.
By default, os-prober is not executed to detect other bootable partitions, but Sparky provides a GRUB option to detect other OSes anyway. However, a subsequent update of the GRUB packages overrides that option. To restore it manually, add the line:
GRUB_DISABLE_OS_PROBER=false
at the end of the file (as root):
/etc/default/grub
Then update GRUB:
sudo update-grub
PC live user:password = live:live
ARM user:password = pi:sparky
If you have Sparky 7 installed – simply keep it up to date. No need to reinstall your OS.
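For reference, keeping a Debian-based system such as Sparky current from a terminal typically looks like this (Sparky’s own graphical upgrade tools work as well):
sudo apt update
sudo apt full-upgrade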
New ISO/IMG images of Sparky 7 “Orion Belt” can be downloaded from the download/stable page.
Release announcement in Polish: https://linuxiarze.pl/sparky-7-4/


(Updated) Forlinx’s New SoM Leverages Rockchip RK3562J Quad-Core Processor


Jul 24, 2024 — by Giorgio Mendoza


Forlinx Embedded has launched the FET3562J-C SoM, a versatile system on module with an optional 1 TOPS NPU, optimized for a broad range of applications including industrial automation, consumer electronics, smart healthcare, energy, and telecommunications.

The FET3562J-C SoM, powered by the Rockchip RK3562J processor with advanced 22nm process technology, features four ARM Cortex-A53 cores operating up to 1.8 GHz. It offers 1GB or 2GB LPDDR4 RAM and 8GB or 16GB eMMC storage options.

How to configure ingress controller in kubernetes to run multi domain-subdomain application


In this article we are going to learn how to configure an ingress controller in Kubernetes so that we can run multiple domain and subdomain applications within the same Kubernetes cluster.

In this demo we are going to run an application with the domain “nginx.example.com” and two other subdomains,
tea.myshop.com and coffee.myshop.com, in Kubernetes using the Nginx ingress controller. Please note that these domains and subdomains are local domains.
To showcase this demo, I am using LXC containers installed on Ubuntu (bare-metal installation). Basically I have four containers/VMs running on the LXC host, as below:
root@vbhost:~# lxc list
+-----------+---------+------------------------+-----------------------------------------------+------------+-----------+
|   NAME    |  STATE  |          IPV4          |                     IPV6                      |    TYPE    | SNAPSHOTS |
+-----------+---------+------------------------+-----------------------------------------------+------------+-----------+
| haproxy   | RUNNING | 10.253.121.146 (eth0)  | fd42:38af:bc0d:704d:216:3eff:fefa:cb4c (eth0) | PERSISTENT | 0         |
+-----------+---------+------------------------+-----------------------------------------------+------------+-----------+
| kmaster   | RUNNING | 172.17.0.1 (docker0)   | fd42:38af:bc0d:704d:216:3eff:fe31:7a49 (eth0) | PERSISTENT | 0         |
|           |         | 10.253.121.39 (eth0)   |                                               |            |           |
|           |         | 10.244.0.0 (flannel.1) |                                               |            |           |
+-----------+---------+------------------------+-----------------------------------------------+------------+-----------+
| kworker01 | RUNNING | 172.17.0.1 (docker0)   | fd42:38af:bc0d:704d:216:3eff:fed4:8226 (eth0) | PERSISTENT | 0         |
|           |         | 10.253.121.32 (eth0)   |                                               |            |           |
|           |         | 10.244.1.1 (cni0)      |                                               |            |           |
|           |         | 10.244.1.0 (flannel.1) |                                               |            |           |
+-----------+---------+------------------------+-----------------------------------------------+------------+-----------+
| kworker02 | RUNNING | 172.17.0.1 (docker0)   | fd42:38af:bc0d:704d:216:3eff:fe9f:82c0 (eth0) | PERSISTENT | 0         |
|           |         | 10.253.121.89 (eth0)   |                                               |            |           |
|           |         | 10.244.2.1 (cni0)      |                                               |            |           |
|           |         | 10.244.2.0 (flannel.1) |                                               |            |           |
+-----------+---------+------------------------+-----------------------------------------------+------------+-----------+

All these containers are running CentOS. Three of them are used for the Kubernetes cluster; the fourth runs haproxy, which we will use as a load balancer to distribute requests across the two Kubernetes worker nodes.
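If you still need to create such containers, a minimal sketch, assuming your LXD installation's images: remote offers a CentOS 7 image (adjust the image alias and container names to your setup):

lxc launch images:centos/7 kmaster
lxc launch images:centos/7 kworker01
lxc launch images:centos/7 kworker02
lxc launch images:centos/7 haproxy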

To learn more about lxc containers follow this link.
Steps:

Deploy kubernetes cluster:

Once you have deployed three LXC containers, configure one of them as the Kubernetes master and the other two as Kubernetes worker nodes. To cut the configuration journey short, you can follow this link.

Deploy ha-proxy container with centos operating system:

We will deploy one more LXC container with CentOS and configure haproxy in it, as per the steps below:

root@vbhost:~# lxc exec haproxy bash
[root@haproxy ~]#
Now install the haproxy package in the container
# yum install haproxy
Once you have installed haproxy, we need to change the configuration so that it will load balance the traffic between the Kubernetes worker nodes. Open the /etc/haproxy/haproxy.cfg file and replace its contents with the following:
[root@haproxy ~]# cat /etc/haproxy/haproxy.cfg
#---------------------------------------------------------------------
# Example configuration for a possible web application. See the
# full configuration options online.
#
#   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events. This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #
    #    local2.*    /var/log/haproxy.log
    #
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

frontend http_front
    bind *:80
    stats uri /haproxy?stats
    default_backend http_back

backend http_back
    balance roundrobin
    server kube01 10.253.121.32:80
    server kube02 10.253.121.89:80
[root@haproxy ~]#

Make sure you change the two IP addresses at the bottom to the IP addresses of your Kubernetes worker nodes so that requests can be load balanced between them. Here are the IP addresses of my worker nodes, which we have entered in the haproxy configuration file above. Also note that we are using HTTP port 80 for our configuration.
# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
kmaster Ready master 31d v1.17.1 10.253.121.39 <none> CentOS Linux 7 (Core) 4.15.0-106-generic docker://19.3.5
kworker01 Ready <none> 31d v1.17.1 10.253.121.32 <none> CentOS Linux 7 (Core) 4.15.0-106-generic docker://19.3.5
kworker02 Ready <none> 31d v1.17.1 10.253.121.89 <none> CentOS Linux 7 (Core) 4.15.0-106-generic docker://19.3.5
#

Now restart and enable the haproxy service.
# systemctl enable haproxy
# systemctl restart haproxy
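Optionally, you can have HAProxy validate the configuration file syntax whenever you edit it:
# haproxy -c -f /etc/haproxy/haproxy.cfg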
We are done with the haproxy configuration. Now log out from it.

Install the Nginx ingress controller on the Kubernetes master node:

Clone the git repo and change directory to “kubernetes-ingress/deployments/”:
# git clone https://github.com/nginxinc/kubernetes-ingress.git
# cd kubernetes-ingress/deployments/

Create the namespace and service account:
# kubectl apply -f common/ns-and-sa.yaml
Apply the role and cluster role binding:
# kubectl apply -f rbac/rbac.yaml
Create the secret:
# kubectl apply -f common/default-server-secret.yaml
Apply the required ConfigMap:
# kubectl apply -f common/nginx-config.yaml
Now deploy the ingress controller as a DaemonSet:
# kubectl apply -f daemon-set/nginx-ingress.yaml
Now if you check the namespaces and the resources within the “nginx-ingress” namespace, you will find resources similar to these being created:
# kubectl get ns
NAME STATUS AGE
cattle-system Active 29d
default Active 31d
efk Active 29d
kube-node-lease Active 31d
kube-public Active 31d
kube-system Active 31d
nginx-ingress Active 10s

# kubectl get all -n nginx-ingress
NAME READY STATUS RESTARTS AGE
pod/nginx-ingress-2rq6s 0/1 ContainerCreating 0 14s
pod/nginx-ingress-65vnd 0/1 ContainerCreating 0 14s

NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/nginx-ingress 2 2 0 2 0 <none> 14s

Once they are ready, we are going to deploy the services.

Deploy the Nginx service of type NodePort listening on port 80, which will run a sample Nginx web server with its default index.html file. Here is the manifest file I am using for it:

# cat nginx-deploy-main.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: nginx
  name: nginx-deploy-main
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx-main
  template:
    metadata:
      labels:
        run: nginx-main
    spec:
      containers:
      - image: nginx
        name: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-deploy-main
spec:
  type: NodePort
  selector:
    run: nginx-main
  ports:
  # By default and for convenience, the `targetPort` is set to the same value as the `port` field.
  - port: 80

Deploy the Nginx service
# kubectl create -f nginx-deploy-main.yaml
deployment.apps/nginx-deploy-main created
service/nginx-deploy-main created
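You can confirm the service and the NodePort that was allocated for it with:
# kubectl get svc nginx-deploy-main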

Now let’s deploy another service, also of type NodePort on port 80. For this service I am using my own Docker image called “manmohanmirkar/mytea_image”, which simply displays the message “This is Tea Shop” in the browser. This service will be called when you access the URL “tea.myshop.com”. A sample manifest file is below:
# cat tea.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: tea
  name: tea-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: tea
  template:
    metadata:
      labels:
        app: tea
    spec:
      containers:
      - name: tea
        image: manmohanmirkar/mytea_image:latest
---
apiVersion: v1
kind: Service
metadata:
  name: tea-deploy
spec:
  type: NodePort
  selector:
    app: tea
  ports:
  # By default and for convenience, the `targetPort` is set to the same value as the `port` field.
  - port: 80

Apply the tea service:
# kubectl create -f tea.yml
deployment.apps/tea-deploy created
service/tea-deploy created

Configure the coffee service

Deploy one more service, again of type NodePort on port 80. For this service I am using my own Docker image called “manmohanmirkar/mycoffee_image”, which simply displays the message “This is Coffee Shop” in the browser. This service will be called when you access the URL “coffee.myshop.com”. A sample manifest file is below:
# cat coffe.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: coffee
  name: coffee-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: coffee
  template:
    metadata:
      labels:
        app: coffee
    spec:
      containers:
      - name: coffee
        image: manmohanmirkar/mycoffee_image:latest
---
apiVersion: v1
kind: Service
metadata:
  name: coffee-deploy
spec:
  type: NodePort
  selector:
    app: coffee
  ports:
  # By default and for convenience, the `targetPort` is set to the same value as the `port` field.
  - port: 80
Now let’s apply the coffee service and deployment.
# kubectl create -f coffe.yml
deployment.apps/coffee-deploy created
service/coffee-deploy created
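Since both deployments label their pods, you can quickly verify that the replicas are running by selecting on those labels:
# kubectl get pods -l app=tea
# kubectl get pods -l app=coffee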

Deploy the Ingress resource

This is the main configuration in this article: the Ingress resource. The Ingress resource performs the task of routing each request to the respective service based on the URL. Let’s have a look at the manifest file:
# cat cafe-ingress.yml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress-resource
spec:
  rules:
  - host: nginx.example.com
    http:
      paths:
      - backend:
          serviceName: nginx-deploy-main
          servicePort: 80
  - host: tea.myshop.com
    http:
      paths:
      - backend:
          serviceName: tea-deploy
          servicePort: 80
  - host: coffee.myshop.com
    http:
      paths:
      - backend:
          serviceName: coffee-deploy
          servicePort: 80

Basically, the above file deploys a resource of type Ingress. It has three host-based rules configured, one for each service. If the host URL is “nginx.example.com”, the backend is the service named “nginx-deploy-main”: whenever a request is made with the URL “nginx.example.com”, it will be forwarded to the service “nginx-deploy-main”.
In a similar way, we have two more hosts, “tea.myshop.com” and “coffee.myshop.com”, with backends configured as “tea-deploy” and “coffee-deploy”. Whenever a request with the URL “tea.myshop.com” reaches the haproxy server, it is forwarded to the service “tea-deploy”, and likewise a request with the URL “coffee.myshop.com” is automatically forwarded to the service “coffee-deploy”, which we have already deployed.
Now let’s deploy the ingress resource:
# kubectl create -f cafe-ingress.yml
ingress.networking.k8s.io/my-ingress-resource created

Crosscheck the ingress resource with the following command:
# kubectl describe ing
Name: my-ingress-resource
Namespace: default
Address:
Default backend: default-http-backend:80 (<error: endpoints “default-http-backend” not found>)
Rules:
Host Path Backends
----  ----  --------
nginx.example.com
nginx-deploy-main:80 (10.244.2.82:80)
tea.myshop.com
tea-deploy:80 (10.244.1.94:80,10.244.2.83:80)
coffee.myshop.com
coffee-deploy:80 (10.244.1.95:80,10.244.2.84:80)
Annotations: <none>
Events:
Type Reason Age From Message
----  ------  ---  ----  -------
Normal AddedOrUpdated 31s nginx-ingress-controller Configuration for default/my-ingress-resource was added or updated
Normal AddedOrUpdated 31s nginx-ingress-controller Configuration for default/my-ingress-resource was added or updated

You can see from the output that if the host is “nginx.example.com”, the configured service is “nginx-deploy-main”; if the host or URL is “tea.myshop.com”, the service is “tea-deploy”; and the same applies to the coffee host with its respective service.

This is the final step, in which we are going to configure DNS entries for all three URLs we have configured in the ingress resource. Basically, we are going to make entries on the host machine pointing all the URLs to the IP of our haproxy container.
# lxc list haproxy
+---------+---------+-----------------------+-----------------------------------------------+------------+-----------+
|  NAME   |  STATE  |         IPV4          |                     IPV6                      |    TYPE    | SNAPSHOTS |
+---------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| haproxy | RUNNING | 10.253.121.146 (eth0) | fd42:38af:bc0d:704d:216:3eff:fefa:cb4c (eth0) | PERSISTENT | 0         |
+---------+---------+-----------------------+-----------------------------------------------+------------+-----------+

So the IP address of haproxy is 10.253.121.146. Simply add the below entries to /etc/hosts on the host machine:
# cat /etc/hosts
127.0.0.1 localhost
192.168.56.5 vbhost

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters



10.253.121.146 nginx.example.com
10.253.121.146 coffee.myshop.com
10.253.121.146 tea.myshop.com

Note the last three entries: all URLs point to haproxy, which will in turn forward the requests to our Kubernetes worker nodes.

Now let’s try to access all the URLs one by one from the host machine.
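For example, with curl; based on the images deployed above, the tea and coffee hosts should return their respective shop messages, while the nginx host serves the default Nginx index page:
# curl http://nginx.example.com
# curl http://tea.myshop.com
# curl http://coffee.myshop.com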

So that is all about how to configure an ingress controller in Kubernetes to run multi-domain and subdomain applications.

Exploiting F5 Big IP Vulnerability


CVE-2020-5902 is a critical remote code execution vulnerability in the configuration interface (aka Traffic Management User Interface – TMUI) of BIG-IP devices used by some of the world’s biggest companies.
So today we are going to demonstrate how it is being used.
To exploit CVE-2020-5902, an attacker needs to send a specifically crafted HTTP request to the server hosting the Traffic Management User Interface (TMUI) utility for BIG-IP configuration.
So first we need to find a vulnerable device, let’s go to Shodan.
In the search box top left type:
title:”BIG-IP®”
This will give us a list of potentially vulnerable devices.

Now let’s navigate to one of the servers.

Perfect.
Open Burp Suite. This will intercept our traffic before it hits the server so that we can modify the request.
In Burp Suite turn on Intercept and refresh the page.

Okay we have the domain name, so before we send it to the server we are going to modify the URL and add some text.
https://{host}/tmui/login.jsp/..;/tmui/locallb/workspace/fileRead.jsp?fileName=/etc/passwd
So let’s move to the Repeater tab to send the request.

As you can see, it prints out the /etc/passwd file; you can read other files too, like /etc/hosts. Have fun, but don’t do anything illegal, or you will probably get a call from the… (insert law enforcement agency).
QuBits 2020-08-07

What Are the Most Important Factors for a Successful Rental Property – NoobsLab


Rental properties have the potential to be enormously profitable. If your monthly expenses for a rental property amount to about $2,000, and you can charge $2,500 in rent, you’ll make a gross profit of $500 every month. That may not seem like much, but if you have a portfolio full of properties, your income can quickly snowball. And if you benefit from property appreciation, you’ll see even better results long term.

The thing is, not every rental property has the same potential. Some rental properties are going to be much more profitable for you in the long term than others. So what factors should you be examining when searching for a rental property? Which qualities and elements are most closely correlated with rental property success?

Determining Your Own Strategy: What Is Success?
First, you need to understand that success means different things to different people. Some real estate investors are almost exclusively interested in cash flow, and they won’t even consider a property that doesn’t reach a certain threshold of monthly profitability. Other investors are more interested in long-term gains, so they’re more than willing to forgo monthly profitability if it means better results over the course of a few years or decades.

What’s important is that you have a solid strategy for yourself in place. What goals are you trying to achieve? What is your investing philosophy? What is the context of your real estate holdings in your overall investment portfolio?

Working with real estate agents, Houston property management experts, and other experienced real estate professionals can make this process easier. They can challenge your biases, teach you new things, address inconsistencies within your strategy, and help you clarify your overall goals. No one should have to pursue real estate investing entirely alone.

The Most Important Factors for a Successful Rental Property

In most cases, these are the most important factors for success in rental property investing:

Property age and condition. Think about the property age and condition. Generally, the older the property is, the more problems you’re going to have with it. If the property’s in good shape, you’ll have far lower maintenance and repair costs. If it needs extra work or care, it may still be profitable – but you’ll need to work those costs into your profitability equations. You can even hire a professional that offers property styling services.

Current rental demand. Next, you’ll need to consider current rental demand. How many people are renting in this neighborhood? How many people are eager to rent in this neighborhood? When a property in this neighborhood is listed as available, how quickly is it filled? What price is being charged for rent for properties like yours in the area? The more demand there is for your property, and the higher rental prices are, the better.

Neighborhood quality. Neighborhood quality is a complex concept, but it’s one that’s important to practically every rational tenant. People look for neighborhoods with low crime rates, good schools, and friendly people. The better the neighborhood is, the more people are going to want to live there – and the more they’ll be willing to pay for the privilege.

Access to transportation. People want to live in properties with access to transportation. If the property is near a main road or preferably, several main roads, it’s going to be associated with much higher demand. The same is true if the property is near a bus stop or alternative mode of public transportation.

Access to amenities. Access to amenities is also favorable, as people want access to gyms, parks, libraries, grocery stores and other accommodations. Convenience is hugely beneficial to tenants.

Job opportunities. You should also keep an eye on job opportunities in the area. Neighborhoods near major employers tend to see faster, more aggressive growth than their counterparts.

Long-term momentum. Next, think about the overall long-term momentum of this neighborhood. Look at factors like total population, rental prices, vacancy rates, and economic growth to determine where things are headed. Is this area in an upswing or a downswing? Where do you see things going over the next 10-20 years? How could things change during that time?

Vacancy rate. Vacancies have the power to crush even the most promising rental property investments, so it’s important to look at the vacancy rate of this property as well as the neighboring properties that surround it. If vacancies seem to be a problem here, take it as a red flag.

Purchase price. One of the most common rules of thumb in the real estate investing world is the one percent rule, which advises property investors to only consider properties that can justify charging gross monthly rent that exceeds one percent of the purchase price. Obviously, this isn’t a hard rule and it’s not going to make sense for every property or every investor. But it does clarify just how important your purchase price is. Almost any rental property is worth considering purchasing if the price is right – and even a hypothetically perfect property is worth dismissing if the price is too high.

If you can find a property that meets or exceeds expectations in all these categories, there’s a good chance that it will make for a successful rental property. Of course, as with all financial decisions of this magnitude, it’s important to be thorough with your due diligence and explore many options before making an offer.

SUSE Receives 37 Badges in the Summer G2 Report


I’m pleased to share that G2, the world’s largest and most trusted tech marketplace, has recognized SUSE’s solutions in its 2024 Summer Report. We received 37 badges across our business units for Rancher Prime, Longhorn, SUSE Linux Enterprise Server (SLES), SUSE Manager – as well as one badge for the openSUSE community with Tumbleweed. We also received the overarching “Users Love Us” badge.
We continue to build on the momentum of providing more than 30 years of service to our customers, partners and the open source communities while innovating in areas such as edge and AI. Receiving 37 badges this quarter – 7 more than the Winter report – showcases the depth and breadth of our strong product portfolio and the dedication our team provides for our customers.
G2 awarded Rancher 11 badges, including Momentum Leader, Leader Enterprise, Leader EMEA, Leader Enterprise EMEA, Leader Small Business and High Performer Asia.
Longhorn made the list for the second time with a High Performer badge.
SLES received 14 badges, including Momentum Leader, Leader, High Performer, High Performer Mid Market, High Performer Europe, Easiest to do Business With and Best Support Enterprise.
SUSE Manager (SUMA) received two badges: Easiest to do Business With and Best Meets Requirements.
Here’s what some of our customers said in their reviews on G2:
Rancher
“Rancher is a powerful tool for managing clusters efficiently. Its intuitive interface simplifies complex tasks, offering seamless control over Kubernetes clusters. With Rancher, you can easily deploy, scale and monitor your applications, ensuring optimal performance and resource utilization. Overall, leveraging Rancher for cluster management can significantly enhance your workflow and streamline operations.”
SUSE Manager:
“[SUSE Manager] is a powerful tool for patching, software management, and overall management, on a variety of Linux distributions. It gives a very good overview of the systems in terms of compliance, but also a granular level of software access on the individual systems. It is relatively easy to form an overview of high priority vulnerabilities.”
SUSE Linux Enterprise Server:
“A really useful Linux distribution, stable and very fast when you need support.”
What’s Next?
Visit G2 to read reviews and share your review of SUSE solutions.

rwatch – Rust implementation of watch

Alternatives to popular CLI tools: watch

rwatch is a command-line utility written in Rust that allows you to run a command repeatedly and watch its output.
It’s a Rust re-implementation of the classic Unix watch command.
This is free and open source software.

Features include:

Run a given command repeatedly.
Clear screen between command runs.
Customizable interval for command execution.
Handle user interruption gracefully.
Cross-platform support.

Website: github.com/davidhfrankelcodes/rwatch
Support:
Developer: davidhfrankelcodes
License: MIT License

rwatch is written in Rust. Learn Rust with our recommended free books and free tutorials

Legalities and Compliance in Web Hosting Services


Whether you’re a business or an individual offering services, your journey starts with web hosting services. You need a website to have an online presence, but when you control your brand’s online space, remember web hosting isn’t just about tech. Like any business deal, web hosting has legal rules to follow.

Legal Complexities in Web Hosting Services
Web hosting involves several legal matters, like possible copyright issues, content responsibility, data protection breaches, and privacy rule violations. Here’s more on these topics:

Copyright and Content Responsibility: Web hosting services could be responsible for the content they host. For example, if a client’s website shares illegal copies of content, uses copyrighted materials without permission, or breaks other intellectual property laws. Both the hosting service and the website owner could face legal trouble in these cases. This could lead to warnings, fines, or even the website being shut down.
Data Security: Online threats like hacking and phishing scams increase daily. So, web hosting providers must have strong data security. They may achieve this by using SSL protocols for secure file transfers, updating systems regularly, detecting intrusions, and training staff on online safety.
Privacy Laws: Web hosts must follow privacy laws like the General Data Protection Regulation (GDPR) or the US’s California Consumer Privacy Act (CCPA). What do these laws do? Companies must ask permission to gather personal information, let people see, change, or delete their data, and inform about any data leaks. You could face fines or legal issues if your company doesn’t follow these rules.

Compliance in Web Hosting Services
Understanding the legal side of web hosting might seem challenging, but following these essential steps can make web hosting simpler and legal:

In-Depth Research: Before choosing a hosting provider, look into their history. See if they’ve had any legal or compliance issues. You can also read client reviews and case studies to understand how committed they are to follow the law and industry standards.
Understanding the Vendor Contract: The vendor contract is a key part of your deal with your hosting provider. It explains their services, how you pay, and what each side must do. For hosting providers, you can also check the service level agreement (SLA) and Terms of Service (ToS). Get a legal expert to help you understand all parts, especially service interruptions, data breaches, and intellectual property rights. For example, a good vendor contract template will clearly say what happens and who is responsible if there is a data breach. Note that vendor contracts are usually only involved when you work directly with a hosting agency (often local) rather than a public web hosting company; for public web hosting companies, you’d check their legal pages, such as the SLA and ToS, instead.
Keep Up With Rules: Following laws isn’t a one-off job but an ongoing task. Laws change, so your attempts to follow them should too. Stay updated on changes in law – like rules about data protection or copyright – and regularly check that your website follows these. Your service provider should inform you about any changes affecting their service or how your website works.
Checking the Backup Policy: A good web host should have a solid backup policy that frequently saves and protects your data. This policy helps you retrieve your data if unforeseen events, such as a server crash or a cyber attack, occur. Your agreement should clearly explain the frequency of backups, their storage duration, the process to recover your data, and who bears responsibility and costs if issues arise.

These tips can make your website safer, improve your bond with your web host, and simplify your hosting experience. As you check for good uptime and customer service, think about these legal points when choosing a web host. This can help you make a smart choice.
Checklist for Hosting Legally
Web hosting providers can steer clear of unexpected legal issues by sticking to laws and keeping good ethics and professionalism. Here’s a simple checklist to help you navigate these challenges:
Understand The Legal Implications of Hosting Content:

Be aware of the potential legal issues that can arise from hosting copyrighted or harmful content.
Implement processes to promptly address and remove illegal content from your servers.

Implement Robust Data Security Measures:

Ensure safe ways of transferring files, such as employing SSL certificates.
Regularly update systems and software to thwart cyber threats.
Use reliable firewall and antivirus solutions to protect your servers and databases.
Plan for frequent vulnerability assessments and penetration tests to evaluate your security standards.

Educate Your Staff:

Teach your team about online dangers and offer regular staff training.

Adhere to Privacy Laws:

Follow privacy laws, like GDPR in Europe or CCPA in the U.S.
Always ask for clear permission before gathering user data. Respect users’ right to see, change, or remove their data.
Make a detailed plan for responding to data breaches.

Have a Comprehensive Vendor Contract:

Have a legal practitioner help in drafting a detailed vendor contract template.
Ensure the template outlines the scope of services, payment terms, termination conditions, and liabilities.

Assure Regular Backups:

Create a consistent data backup plan to prevent data loss, and document this in the user agreement.

Dispute Resolution:

Establish a clear dispute resolution process to manage disagreements with clients or third parties.

Obtain Necessary Business Licenses and Permits:

Confirm that your hosting business has all required licenses, permits, and legal documentation for operation.

Transparent Terms of Service (TOS) and Acceptable Use Policy (AUP):

Draft clear and comprehensive TOS and AUP agreements.
Make sure they are easily accessible to the users and written in language that is simple to understand.

Have a Solid Exit Strategy:

Prepare an exit strategy that minimally disrupts your users if the business faces insolvency or must cease operations for other reasons.

Conclusion on web hosting legalities
Web hosting services are key for businesses and individuals to stay online. However, dealing with these services can involve legal details. It’s just as important to have a working and good-looking website as it is to handle these legal aspects.
Problems like copyright issues, being responsible for content, data security risks, and breaking privacy laws can come up in web hosting. Knowing and avoiding these possible problems will help make your website safe and trustworthy.

Cloud Hosting For Internet Of Things


Cloud hosting for the Internet of Things (IoT) is a game-changer for businesses and organizations of all sizes. IoT is the network of physical devices, vehicles, buildings, and other items embedded with sensors, software, and connectivity, which enables these objects to collect and exchange data.

The use of cloud hosting for IoT allows businesses to collect and store large amounts of data generated by IoT devices and then process and analyze that data to gain insights and make informed decisions. This is made possible by the scalability and flexibility of cloud hosting, which enables businesses to easily add or remove resources as needed to handle the growing amount of data generated by IoT devices.

One of the biggest benefits of using cloud hosting for IoT is that it allows businesses to access and manage their IoT devices from anywhere, at any time. This is made possible by the remote access capabilities of cloud hosting, which enable businesses to connect to their IoT devices and data from any location with an internet connection.

Another benefit of using cloud hosting for IoT is that it enables businesses to reduce their costs and improve their efficiency. This is because cloud hosting allows businesses to pay only for the resources they need, rather than having to invest in expensive hardware and infrastructure to support their IoT devices.

In addition, cloud hosting for IoT also enables businesses to improve the security of their IoT devices and data. This is because cloud hosting providers typically have advanced security measures in place, such as encryption, authentication, and access controls, to protect the data and devices from unauthorized access and breaches.

Overall, cloud hosting for IoT is a powerful solution that allows businesses to easily and securely manage their IoT devices and data, and to gain valuable insights to improve their operations and make better decisions. As IoT technology continues to advance and more devices are connected, the need for cloud hosting will become even more important for businesses of all sizes.

In conclusion, cloud hosting for IoT is an excellent solution for businesses of all sizes, especially those looking to gather insights, reduce costs, and improve efficiency and security. With its scalability, flexibility, and remote access capabilities, cloud hosting is a powerful tool for managing IoT devices and data, and for unlocking valuable insights to improve operations and decision making.

APT-CACHE and APT-GET commands for package management in Ubuntu

The Linux Juggernaut

Introduction

In an earlier article, we demonstrated how you could use the dpkg package manager to install, remove and query information about software packages in the Ubuntu OS. In this article, we will show you how to use apt-cache to search for and query information about packages available in online and local repositories, and we will also show you how to use apt-get to install and uninstall packages. Essentially, apt-cache is the tool we use to query the apt software cache to obtain information about packages, and apt-get is the tool we use for installing packages and modifying the state of packages installed on the system. All the examples demonstrated in this article were performed on an Ubuntu 16.04 system.

APT-CACHE examples

Example 1: List all available packages

To list all packages available to be installed, we use the apt-cache pkgnames command as shown below:

root@linuxnix:~# apt-cache pkgnames
libdatrie-doc
libfstrcmp0-dbg
librime-data-sampheng
python-pyao-dbg
fonts-georgewilliams
python3-aptdaemon.test
libcollada2gltfconvert-dev
python3-doc8
r-bioc-hypergraph
angrydd
fonts-linuxlibertine
--------- output truncated for brevity

Note that this command only shows the package names and no other information about the packages.

Example 2: Search for a package

To search for a package, use the apt-cache search command followed by the package name. Let’s search for the nano text editor.

root@linuxnix:~# apt-cache search nano
nano – small, friendly text editor inspired by Pico
alpine-pico – Simple text editor from Alpine, a text-based email client
gwave – waveform viewer eg for spice simulators
kiki-the-nano-bot – 3D puzzle game, mixing Sokoban and Kula-World
kiki-the-nano-bot-data – Kiki the nano bot – game data
libaudio-moosic-perl – Moosic client library for Perl
--------- output truncated for brevity

The apt-cache search command prints the package name along with a short one-line description of the package. This command performs a fuzzy match, as it looks for the string being searched in the package names as well as the package descriptions. Therefore, we get a lot of results, and most of them are not entirely accurate.

Example 3: Query information about a package

To obtain information about a package, we use the apt-cache show command followed by the package name. Let’s view the available information for the nano package.

root@linuxnix:~# apt-cache show nano
Package: nano
Architecture: amd64
Version: 2.5.3-2ubuntu2
Priority: standard
Section: editors
Origin: Ubuntu
Maintainer: Ubuntu Developers <ubuntu-devel-discuss@lists.ubuntu.com>
Original-Maintainer: Jordi Mallach <jordi@debian.org>
Bugs: https://bugs.launchpad.net/ubuntu/+filebug
Installed-Size: 684
Provides: editor
Depends: libc6 (>= 2.14), libncursesw5 (>= 6), libtinfo5 (>= 6)
Suggests: spell
Conflicts: pico
Replaces: pico
Filename: pool/main/n/nano/nano_2.5.3-2ubuntu2_amd64.deb
Size: 190566
MD5sum: e31024f60c11f615be8c3abf86af8cd9
SHA1: b2044c27b55e81306128822027d0277ae03a5487
SHA256: 1c0ce9033e272743d4037a063c46b011f73efbbd38932bc3a351d5bc471d1a5e
Homepage: http://www.nano-editor.org/
Description-en: small, friendly text editor inspired by Pico
GNU nano is an easy-to-use text editor originally designed as a replacement
for Pico, the ncurses-based editor from the non-free mailer package Pine
(itself now available under the Apache License as Alpine).
.
However, nano also implements many features missing in pico, including:
– feature toggles;
– interactive search and replace (with regular expression support);
– go to line (and column) command;
– auto-indentation and color syntax-highlighting;
– filename tab-completion and support for multiple buffers;
– full internationalization support.
Description-md5: b7e1d8c3d831118724cfe8ea3996b595
Task: standard, ubuntu-touch-core, ubuntu-touch
Supported: 5y

Package: nano
Priority: standard
Section: editors
Installed-Size: 684
Maintainer: Ubuntu Developers <ubuntu-devel-discuss@lists.ubuntu.com>
Original-Maintainer: Jordi Mallach <jordi@debian.org>
Architecture: amd64
Version: 2.5.3-2
Replaces: pico
Provides: editor
Depends: libc6 (>= 2.14), libncursesw5 (>= 6), libtinfo5 (>= 6)
Suggests: spell
Conflicts: pico
Filename: pool/main/n/nano/nano_2.5.3-2_amd64.deb
Size: 190920
MD5sum: bd757bcdb6ffced902490ca2803c8e15
SHA1: 5baa89fa02ef15f26ce09fdbfd1d0be61349b7d7
SHA256: 2a61014111de157e6ce05f9e970563719872d5908f8bef43b3576228b2e0af0a
Description-en: small, friendly text editor inspired by Pico
GNU nano is an easy-to-use text editor originally designed as a replacement
for Pico, the ncurses-based editor from the non-free mailer package Pine
(itself now available under the Apache License as Alpine).
.
However, nano also implements many features missing in pico, including:
– feature toggles;
– interactive search and replace (with regular expression support);
– go to line (and column) command;
– auto-indentation and color syntax-highlighting;
– filename tab-completion and support for multiple buffers;
– full internationalization support.
Description-md5: b7e1d8c3d831118724cfe8ea3996b595
Homepage: http://www.nano-editor.org/
Bugs: https://bugs.launchpad.net/ubuntu/+filebug
Origin: Ubuntu
Supported: 5y
Task: standard, ubuntu-touch-core, ubuntu-touch

This shows a lot of useful information about the package, like the available versions, the .deb package file name, dependencies and conflicts, and a package description.

Example 4: Get cache statistics

To check the total number of packages available, we use the apt-cache stats command.

root@linuxnix:~# apt-cache stats
Total package names: 70538 (1,411 k)
Total package structures: 70546 (3,104 k)
Normal packages: 55326
Pure virtual packages: 1146
Single virtual packages: 4764
Mixed virtual packages: 474
Missing: 8836
Total distinct versions: 62726 (5,018 k)
Total distinct descriptions: 118584 (2,846 k)
Total dependencies: 369801/98006 (8,964 k)
Total ver/file relations: 1482 (35.6 k)
Total Desc/File relations: 53048 (1,273 k)
Total Provides mappings: 12818 (308 k)
Total globbed strings: 157526 (3,489 k)
Total slack space: 16.4 k
Total space accounted for: 26.9 M
Total buckets in PkgHashTable: 50503
Unused: 12488
Used: 38015
Utilization: 75.2728%
Average entries: 1.85574
Longest: 8
Shortest: 1
Total buckets in GrpHashTable: 50503
Unused: 12488
Used: 38015
Utilization: 75.2728%
Average entries: 1.85553
Longest: 8
Shortest: 1
root@linuxnix:~#

APT-GET examples

Example 5: Update Ubuntu package lists

The apt-get update command downloads the package lists from the repositories and “updates” them to get information on the newest versions of packages and their dependencies. This command is used to re-synchronize the package index files from their sources. The indexes of available packages are fetched from the location(s) specified in /etc/apt/sources.list. Let’s execute it.

root@linuxnix:~# apt-get update
Hit:1 http://ap-southeast-1.ec2.archive.ubuntu.com/ubuntu xenial InRelease
Hit:2 http://ap-southeast-1.ec2.archive.ubuntu.com/ubuntu xenial-updates InRelease
Hit:3 http://ap-southeast-1.ec2.archive.ubuntu.com/ubuntu xenial-backports InRelease
Get:4 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
Fetched 102 kB in 1s (77.1 kB/s)
Reading package lists… Done
root@linuxnix:~#

Example 6: Update the operating system

The apt-get upgrade command is used to upgrade all the currently installed software packages on the system. The apt-get upgrade command does not remove any currently installed packages.

root@linuxnix:~# apt-get upgrade
Reading package lists… Done
Building dependency tree
Reading state information… Done
Calculating upgrade… Done
The following packages were automatically installed and are no longer required:
linux-aws-headers-4.4.0-1032 linux-headers-4.4.0-1032-aws linux-image-4.4.0-1032-aws
Use ‘apt autoremove’ to remove them.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
root@linuxnix:~#

If you want to upgrade, unconcerned about whether software packages will be added or removed to fulfill dependencies, use the apt-get dist-upgrade command.

root@linuxnix:~# apt-get dist-upgrade
Reading package lists… Done
Building dependency tree
Reading state information… Done
Calculating upgrade… Done
The following packages were automatically installed and are no longer required:
linux-aws-headers-4.4.0-1032 linux-headers-4.4.0-1032-aws linux-image-4.4.0-1032-aws
Use ‘apt autoremove’ to remove them.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
root@linuxnix:~#

The system that I’m working on has already been updated with the latest available package updates, which is why the upgrade commands completed with zero package updates.

Example 7: Install a package

To install a package, we use the apt-get install command followed by the package name. For demonstration purposes, let’s install the nmap package.

root@linuxnix:~# apt-get install nmap
Reading package lists… Done
Building dependency tree
Reading state information… Done
The following packages were automatically installed and are no longer required:
linux-aws-headers-4.4.0-1032 linux-headers-4.4.0-1032-aws linux-image-4.4.0-1032-aws
Use ‘apt autoremove’ to remove them.
The following additional packages will be installed:
libblas-common libblas3 liblinear3 liblua5.2-0 libxslt1.1 lua-lpeg ndiff python-bs4 python-chardet python-html5lib python-lxml
python-pkg-resources python-six
Suggested packages:
liblinear-tools liblinear-dev python-genshi python-lxml-dbg python-lxml-doc python-setuptools
The following NEW packages will be installed:
libblas-common libblas3 liblinear3 liblua5.2-0 libxslt1.1 lua-lpeg ndiff nmap python-bs4 python-chardet python-html5lib
python-lxml python-pkg-resources python-six
0 upgraded, 14 newly installed, 0 to remove and 0 not upgraded.
Need to get 6,312 kB of archives.
After this operation, 28.1 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
--------- output truncated for brevity

When we type apt-get install, the apt-get tool first goes through the package lists to find out which repository has the package we are trying to install. It then gathers information about the package dependencies in the next step. Once the required package information is available, the package and any dependencies are downloaded from the appropriate repository and installed on the system.

Example 8: Install packages using wildcards

We can use the * wildcard to install packages that match a given package name. In the below example we attempt to install all packages that have “git” in their name.

root@linuxnix:~# apt-get install git*
Reading package lists… Done
Building dependency tree
Reading state information… Done
Note, selecting ‘git-sh’ for glob ‘git*’
Note, selecting ‘gitolite’ for glob ‘git*’
Note, selecting ‘git-ftp’ for glob ‘git*’
Note, selecting ‘git-big-picture’ for glob ‘git*’
Note, selecting ‘git-gui’ for glob ‘git*’
Note, selecting ‘gitlab-shell’ for glob ‘git*’
Note, selecting ‘git-hub’ for glob ‘git*’
--------- output truncated for brevity

Example 9: Install multiple packages in a single command

To install more than one package in a single apt-get install command, just specify the additional package names separated by whitespace. For example:

apt-get install <package1> <package2>

Example 10: Install a specific version of a package

To install a specific version of a package, use the syntax apt-get install <package name>=<required version>. You may obtain the available versions of a package using the apt-cache show <package name> command. In the below example, we try to install a specific version of the nano package.

root@linuxnix:~# apt-get install nano=2.5.3-2
Reading package lists… Done
Building dependency tree
Reading state information… Done
The following packages were automatically installed and are no longer required:
linux-aws-headers-4.4.0-1032 linux-headers-4.4.0-1032-aws linux-image-4.4.0-1032-aws
Use ‘apt autoremove’ to remove them.
Suggested packages:
spell
The following packages will be DOWNGRADED:
nano
0 upgraded, 0 newly installed, 1 downgraded, 0 to remove and 0 not upgraded.
Need to get 191 kB of archives.
After this operation, 0 B of additional disk space will be used.
Do you want to continue? [Y/n]

Since a higher version of the package was already installed, apt-get would have performed a downgrade of the package if we had gone ahead with the install.

Example 11: Remove a package

To remove a package, we use the apt-get remove command followed by the package name. Let’s remove the nmap package that we installed earlier.

root@linuxnix:~# apt-get remove nmap
Reading package lists… Done
Building dependency tree
Reading state information… Done
The following packages were automatically installed and are no longer required:
libblas-common libblas3 liblinear3 liblua5.2-0 libxslt1.1 linux-aws-headers-4.4.0-1032 linux-headers-4.4.0-1032-aws
linux-image-4.4.0-1032-aws lua-lpeg ndiff python-bs4 python-chardet python-html5lib python-lxml python-pkg-resources python-six
Use ‘apt autoremove’ to remove them.
The following packages will be REMOVED:
nmap
0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
After this operation, 21.3 MB disk space will be freed.
Do you want to continue? [Y/n] y
(Reading database … 108793 files and directories currently installed.)
Removing nmap (7.01-2ubuntu2) …
Processing triggers for man-db (2.7.5-1) …

Example 12: Download a package without installation

To download a package without installing it, use the apt-get download command followed by the package name. We’ll download the nmap package to demonstrate.

root@linuxnix:~# apt-get download nmap
Get:1 http://ap-southeast-1.ec2.archive.ubuntu.com/ubuntu xenial/main amd64 nmap amd64 7.01-2ubuntu2 [4,638 kB]
Fetched 4,638 kB in 0s (43.7 MB/s)

This downloads the package into the current working directory.

root@linuxnix:~# ls
nmap_7.01-2ubuntu2_amd64.deb

Note that this does not download the package’s dependencies. (If you also need the dependencies downloaded without installing anything, apt-get install --download-only <package> fetches them into /var/cache/apt/archives.)

Example 13: View the changelog for a package

The developers record the changes and updates made to a package throughout its history in the package changelog. To view the changelog for a package, we use the apt-get changelog command followed by the package name.

root@linuxnix:~# apt-get changelog nmap
Get:1 http://changelogs.ubuntu.com nmap 7.01-2ubuntu2 Changelog [27.8 kB]
Fetched 27.8 kB in 0s (35.4 kB/s)
nmap (7.01-2ubuntu2) xenial; urgency=medium
* Revert the last change; no changes left.
— Matthias Klose <doko@ubuntu.com> Thu, 31 Mar 2016 13:36:38 +0200
nmap (7.01-2ubuntu1) xenial; urgency=medium
* Configure –without-liblua for a first build.
— Matthias Klose <doko@ubuntu.com> Mon, 22 Feb 2016 16:48:47 +0100

Example 14: Check for broken dependencies

The apt-get check command checks for any dependency issues prevalent on the system.

root@linuxnix:~# apt-get check
Reading package lists… Done
Building dependency tree
Reading state information… Done
root@linuxnix:~#

Example 15: Clean the apt-get cache directory

When we install packages using apt-get, the utility keeps a local copy of the .deb file for each package being installed in /var/cache/apt/archives. The size of this directory can become significant over time, and it therefore makes sense to clean it periodically. The apt-get autoclean command performs this task for us, removing only those cached packages that can no longer be downloaded. Given below is a practical demonstration.

root@linuxnix:~# apt-get autoclean
Reading package lists… Done
Building dependency tree
Reading state information… Done
root@linuxnix:~#

Conclusion

In this article, we demonstrated the most frequently used features and options of the apt-cache and apt-get utilities. In our next article, we’ll explain how to use the apt utility, which combines the features provided by apt-cache and apt-get.

About the author: He started his career in IT in 2011 as a system administrator. He has since worked with HP-UX, Solaris and Linux operating systems, along with exposure to high availability and virtualization solutions. He has a keen interest in shell, Python and Perl scripting and is learning the ropes on AWS cloud, DevOps tools, and methodologies. He enjoys sharing the knowledge he’s gained over the years with the rest of the community.