The initialization daemon systemd has become the de facto standard in modern Linux systems and is already used in many popular distributions: Debian, RHEL/CentOS, and Ubuntu (as of version 15.04). Compared to traditional syslog, systemd takes an entirely different approach to logging.
At its core is centralization: the journal component collects all of the system's messages, both from the kernel and from various services and applications. There is no need to configure log routing; applications can simply write to stdout and stderr, and the journal saves these messages automatically. A similar setup is possible with Upstart, but Upstart saves each log to a separate file, whereas systemd stores everything in a binary format, which greatly simplifies organizing and searching logs.
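As a quick sketch of how this looks in practice (assuming a system running systemd; the tag `my-app` is just an arbitrary example), a process can write a line to the journal and it can then be filtered back out with journalctl:

```shell
# Send a message to the journal, tagged "my-app" (an example tag)
echo "service started" | systemd-cat -t my-app

# Read back the most recent entry with that tag
journalctl -t my-app -n 1 --no-pager
```

The same `journalctl` interface searches across every service's output, which is exactly the convenience the binary journal buys over per-service log files.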
Back in 2014, the best (if not only) option for patching the Linux kernel without rebooting was KernelCare, a tool developed by our partners at CloudLinux.
The situation has since changed quite a bit: live patching has officially been included in the kernel as of version 4.0. The tools kpatch and kGraft, which were still in development in 2014, have also been massively improved. Kpatch was even added to the official repositories, and in Ubuntu 16.04 it can be installed with the default package manager. Canonical has also recently released its Canonical Livepatch Service, which can be used to patch the Ubuntu kernel without rebooting.
Our company has been working on open-source projects for over five years now. We registered on GitHub in May 2011 and have already published around 30 repositories. Even though we’ve mentioned some of our projects in older posts, we’d like to take today to review a few of these.
Linux network stack performance has become increasingly relevant over the past few years. This is perfectly understandable: the amount of data that can be transferred over a network and the corresponding workload has been growing not by the day, but by the hour.
Not even the widespread adoption of 10 GE network cards has resolved this issue; many of the bottlenecks that prevent packets from being processed quickly lie in the Linux kernel itself.
Today, we’re going to talk about a unique easy-to-use tool that makes using our Cloud Storage even easier. Meet rclone. The developers have described it as “rsync for cloud storage”, and this says a lot.
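To give a feel for the comparison, here is a minimal sketch of everyday rclone usage. It assumes rclone is installed and that a remote (here named `cloud`, an arbitrary example name) has already been set up interactively with `rclone config`:

```shell
# List the containers on the remote named "cloud" (example name)
rclone lsd cloud:

# Copy a local directory into the storage; like rsync, rclone only
# transfers files that are new or have changed
rclone copy ./backups cloud:backups
```

The rsync-like behavior, transferring only what has changed, is what makes the "rsync for cloud storage" description so apt.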
In addition to writing code, a lot needs to be done before a program can be launched, and this can be very time consuming. Combining everything written by different developers, creating an installer, preparing the documentation: it might not seem like much, but most programmers can't imagine just how much time these fairly routine operations actually take. It's not uncommon for an entire team to rush to get its work done, but that only creates more errors and issues. These problems take time to resolve, and inevitably, the product's release gets pushed back to TBA.
Workloads need to be assessed and addressed when a project is first being developed. This is the best way to avoid downed servers and the ensuing losses, both reputational and material. Even though we can increase a server's power and optimize its algorithms and code to accommodate growing loads, sooner or later these methods just won't cut it.
Today, we’ll be talking about hosting static sites in Cloud Storage. More specifically, we’ll be looking at how they can be set up and optimized.
From a user's point of view, one of the most important criteria for a site is its load time. If a site takes too long to load for one reason or another, it will lose visitors who simply don't want to wait. To speed up a site, it has to be optimized.
Below are some of our tips for optimizing static sites in Cloud Storage and decreasing load times.
Web projects rely heavily on their Internet connection; no online service today can function normally without sufficient bandwidth. Overlooking the speed and quality of your Internet connection can lead to serious consequences: lost users, a damaged reputation, lost revenue, etc.
As you know, there are two main kinds of bandwidth: guaranteed and non-guaranteed (a.k.a. shared). Let’s take a closer look at each of these.