What’s New With Linux Server in 2022?

Although there are hundreds of Linux distributions, choosing the best one depends largely on your particular use case. For instance, if you are deploying a server, you may want to consider CentOS.

This is a server distribution that is highly optimized for running enterprise applications. It is also known to be very stable, which is one of the reasons many administrators choose it. You may also want to consider Ubuntu, the most popular server distribution, or try SUSE, Debian, and AlmaLinux. Each of these distributions offers a variety of useful features and suits a range of use cases.

CentOS is one of the most popular server distributions and a long-time favorite among administrators.

It is a community rebuild of Red Hat Enterprise Linux (RHEL), with a large community and many tutorials and articles to help administrators learn the operating system. Despite its popularity, CentOS has recently changed direction and, with the move to CentOS Stream, is no longer a pure drop-in replacement for Red Hat Enterprise Linux.

CentOS has also recently released version 9 of its distribution, CentOS Stream 9, which tracks just ahead of RHEL development. This release brings several changes, including support for live kernel patching, an improved Podman container engine, SELinux policy changes, improved subsystem metrics, and an enhanced performance metrics page. There is also the Cockpit web console for monitoring the health of your physical or virtual Linux server, which you can use to get an idea of which resource spikes may be causing issues.

RHEL 9 ships with SELinux, a kernel-level mandatory access control framework, and allows you to run your system as a container built from Red Hat Universal Base Images. You can also use live kernel patching without disrupting your production environment, and enable an Information Barrier feature, which prevents users from sharing sensitive data unless you explicitly allow them to do so.

OpenSSL 3.0.1 also offers improved support for HTTPS, a new versioning scheme, and improved cryptographic policies.

RHEL 9 is also useful for hosting web applications. With OpenSSL 3's provider model, you can programmatically invoke providers based on application requirements. You can also enable container short-name validation, another feature that will make your life easier.

You can also test containerized applications on an out-of-the-box RHEL 9 configuration, and use Canonical's Juju tool to spin up a Kubernetes deployment in a matter of seconds.

The Red Hat Enterprise Linux web console is also improved.

The Cockpit web console is available on both virtual and physical Linux systems and offers features such as performance metrics, network statistics, and system health data. You can use its graphical interface to create custom metrics, and the web console provides an enhanced performance metrics page. Cockpit also supports live kernel patching, which lets you apply critical kernel patches without disruption.

OpenSSL 3.0.1 also introduces a new versioning scheme and a new provider concept, along with new security ciphers, improved HTTPS support, and new cryptographic policies.

GitHub launches sponsored code repositories

Specializing in open source code repositories, the GitHub platform now offers a feature that lets developers create sponsor-only repositories. Financial support from corporate partners is only just beginning.

Open source does not automatically mean free (far from it), and it can also go hand in hand with sponsorship.

The famous source code repository GitHub, now part of Microsoft, has taken its Sponsors feature up a notch. Until now, it let users support other users, and later added the ability for organizations to create and receive sponsorships. Now the company is going a step further with the launch of sponsor-only repositories, a feature that lets developers interact more effectively with sponsors.

Specifically, developers and companies will now be able to attach a private repository to each of their sponsorship levels. This will allow sponsors to access the repository. Note that these invitations are automatically managed by GitHub and therefore require no manual processing.

The features offered are varied and include Sponsorware (access to projects for your sponsors only), Discussions (communicating with sponsors via messages and reported issues) and Early Access (previewing code before it is open-sourced). In addition, the platform has added support for custom sponsorship amounts. “You now have more control and can set a minimum custom amount in your tier settings.

Also, transaction exports will now give you the location and VAT information that many of you need for sales tax calculations,” GitHub says. “You can now write a custom message that will be displayed to each new sponsor after they create their sponsorship. This is a great way to welcome your new sponsors and give them more information about how you manage your sponsored projects.”

Pushing sponsorship even further

GitHub also now gives the ability to add metadata to the URLs of a sponsorship page to see what brings in new sponsors. For example, a user can include specific metadata in the URL they use when tweeting about development work in progress. The collected metadata can also be displayed in the transaction export.
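Mechanically, this kind of URL tagging is just query-string construction. A minimal sketch (the parameter name `metadata` and the account name are hypothetical examples; GitHub's actual query parameters may differ):

```python
from urllib.parse import urlencode

def sponsor_url_with_metadata(base_url: str, **metadata: str) -> str:
    """Build a sponsors-page URL with tracking metadata as query parameters."""
    return f"{base_url}?{urlencode(metadata)}"

# Tag a link before tweeting it, so the source shows up later in exports.
url = sponsor_url_with_metadata(
    "https://github.com/sponsors/octocat",  # example account
    metadata="tweet-2022-03",               # hypothetical parameter name
)
print(url)
```

Each distinct tag value then identifies which post or campaign brought a sponsor in.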

GitHub doesn’t plan to stop there: “The next chapter of GitHub Sponsors will open the door for more companies to support the open source projects they depend on. We’re partnering with more companies every week to enhance our beta program,” the platform says. “We’ve also heard that it’s difficult to find projects to sponsor, which affects both sponsors and maintainers.

Stay tuned for future work to improve the discovery experience on GitHub, making it easier for the community to explore dependencies and decide who to support, and helping maintainers who use sponsors grow their audience, community and overall funding.”

Log4j flaw: open source is not the problem

At a hearing before a U.S. Senate committee, executives from Cisco, Palo Alto and Apache discussed the industry’s response to the Log4j vulnerability and potential future problems. They were united in refusing to cast aspersions on open source.

After the White House, the U.S. Senate is now questioning the long-term impact of the serious vulnerability discovered late last year in the open source software Apache Log4j. “Open source is not the problem,” said Dr. Trey Herr, director of the Cyber Statecraft Initiative at the U.S. international relations think tank Atlantic Council, at a hearing of the U.S. Senate Committee on Homeland Security & Government Affairs this week. “Software supply chain security issues have been a concern for the cybersecurity community for years,” he said.

Experts say it will take a long time and a lot of work to address the Log4j flaw and its impact. Security researchers at Cisco Talos believe that Log4j will be heavily exploited in the future, and users should apply patches to affected products and implement mitigations without delay. The Java logging library is widely used in services, websites, and enterprise and consumer applications, as it is an easy-to-use tool for client/server application development.

A defense of open source

If exploited, the Log4j flaw gives an unauthenticated remote actor the ability to take control of an affected server system and gain access to corporate information or launch a denial-of-service attack. The Senate committee asked experts to outline industry responses and ways to prevent future software exposures.

Because the Log4j flaw affects open source software, experts spent a lot of time defending the use of open source software in critical platforms. “The Log4j vulnerability, which can be exploited by typing just 12 characters, is just one example of the serious threat that widespread software vulnerabilities, including those in open source code, or freely available code developed by individuals, can pose to national and economic security,” said committee chairman Senator Gary Peters (D-MI).

“In terms of the amount of online services, sites and devices exposed, the potential impact of this software vulnerability is immeasurable, and it puts all of our critical infrastructure, from banks and power grids, to government agencies, at risk of network breaches,” the senator added.

Cisco security chief Brad Arkin wanted to defend open source software. “I don’t think open source software is at fault, as some have suggested, and it would be wrong to suggest that the Log4j vulnerability is evidence of a unique flaw or that open source software poses an increased risk,” Brad Arkin, Cisco’s senior vice president and chief security officer, told the committee.

“The truth is that all software contains vulnerabilities due to human design, integration and coding errors,” he argued. “Cisco is a significant user of and active contributor to open source security projects. These efforts are essential and necessary to maintain the integrity of shared blocks of code across fundamental elements of the IT infrastructure,” Arkin said. “However, focusing exclusively on the risks posed by open source software could distract us from other important areas where we can address the security risks inherent in all software,” he added.

Taking the long view and the means to remediate

According to Dr. Herr of the Atlantic Council, we should expect to discover more similar vulnerabilities. “The Log4j logging program is extremely popular, and fixing its flaws has required considerable effort and widespread public attention, but this is not the last time this type of incident will occur,” Herr said. Among the efforts federal agencies should undertake to improve open source security, he suggested, would be to fund the unglamorous basics, providing resources where industry would not.

A vulnerability found in the Snap package manager for Linux

A flaw discovered in the Snap package manager for Linux systems, developed by Canonical, exposes users to privilege escalation, a risk that can lead to root access.

Researchers have discovered an easy-to-exploit vulnerability in the Snap universal application packaging and distribution system, developed for Ubuntu, but available on multiple Linux distributions. The flaw allows a low-privileged user to execute malicious code with root privileges, in other words, those of the highest administrative account in Linux.

This vulnerability, tracked as CVE-2021-44731, is one of several flaws discovered in various Linux components by researchers from the security company Qualys during their research on Snap security. Like another vulnerability, CVE-2021-44730, it is located in snap-confine, the tool used to set up the sandboxes in which Snap applications run.

Snap is a package manager for Linux systems developed by Canonical, the company behind the Ubuntu desktop and server distribution. It allows the packaging and distribution of autonomous applications called “snaps” that run in a restricted container, offering a configurable security level. Because they are self-contained, Snap applications have no external dependencies, allowing them to run on multiple platforms or distributions.

In general, each major Linux distribution maintains its own pre-packaged software repository and package manager, e.g. DEB packages for Debian and Ubuntu, RPM for Fedora and Red Hat, Pacman for Arch Linux, and so on. All these systems fetch the desired package and all its dependencies as separate packages. Snap applications, on the other hand, ship with all necessary dependencies, making them deployable on any Linux system that runs the Snap service.

Extensive security audit already conducted

The Snap manager ships by default on Ubuntu and several other Linux distributions and is available as an option in many more, including the major ones. It is used to distribute not only desktop applications, but also cloud and IoT applications. Snap confinement – the isolation feature – has three levels of security, with Strict mode used by most applications. In this mode, applications must request permission to access files, other processes or the network. This is reminiscent of the application sandboxing and permissions model of mobile operating systems like Android. Since application sandboxing is one of Snap's main features, any privilege escalation vulnerability that allows users to escape this isolation and take control of the host system is considered critical.

Qualys researchers have named their two snap-confine vulnerabilities “Oh Snap! More Lemmings,” because they were discovered after another privilege escalation flaw from 2019 called Dirty Sock. Since Dirty Sock, Snap has undergone a thorough security audit by SUSE's security team, and in general snap-confine is programmed very defensively, using many kernel security features such as AppArmor profiles, seccomp filters and mount namespaces. “We almost gave up on our audit after a few days,” Qualys researchers said in their advisory, adding that “discovering and exploiting a vulnerability in snap-confine was extremely difficult (especially in a default Ubuntu installation).”

Other bugs also discovered

Nevertheless, the team decided to continue its audit after finding some minor bugs, and ended up discovering the two privilege escalation vulnerabilities CVE-2021-44730 and CVE-2021-44731. CVE-2021-44730 enables a so-called “hardlink attack”, exploitable only in non-default configurations, when the kernel parameter fs.protected_hardlinks is set to 0.
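An administrator can check that kernel parameter directly: the value reported by `sysctl fs.protected_hardlinks` lives at a standard procfs path on Linux. A minimal sketch (the helper name is ours; the path argument exists only so the logic can be exercised on any file):

```python
from pathlib import Path
from typing import Optional

def protected_hardlinks(procfs: str = "/proc/sys/fs/protected_hardlinks") -> Optional[bool]:
    """Report whether the kernel's hardlink protection is enabled.

    Returns True/False from the sysctl value, or None if the procfs
    entry does not exist (e.g. on non-Linux systems).
    """
    path = Path(procfs)
    if not path.exists():
        return None
    # The file contains "0" (disabled) or "1" (enabled).
    return path.read_text().strip() != "0"
```

On most modern distributions this returns True, since the parameter defaults to 1, which is why the hardlink attack applies only to non-default configurations.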

The CVE-2021-44731 vulnerability, meanwhile, is a race condition exploitable in default installations of Ubuntu Desktop and Ubuntu Server. And this race condition opens a lot of possibilities: “Within the snap mount namespace (which can be entered via snap-confine itself), it becomes possible to mount a non-sticky, world-writable directory onto /tmp, or mount any other part of the file system onto /tmp,” explained the Qualys researchers. “This race condition can be reliably won by monitoring /tmp/snap.lxd with inotify, placing the exploit and snap-confine on the same CPU with sched_setaffinity(), and lowering snap-confine's scheduling priority with setpriority() and sched_setscheduler(),” they added.
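The CPU-pinning trick the researchers describe is exposed directly in Python on Linux. A simplified sketch of just that step (the real exploit pins two cooperating processes to the same CPU; here we only pin the current process, and the function name is ours):

```python
import os

def pin_to_one_cpu() -> int:
    """Pin the calling process to a single CPU, mirroring the exploit's
    use of sched_setaffinity() to make a race condition easier to win."""
    cpu = min(os.sched_getaffinity(0))   # pick any CPU we are allowed to run on
    os.sched_setaffinity(0, {cpu})       # restrict this process to that CPU
    return cpu
```

Once the victim and the attacker run on the same CPU, lowering the victim's scheduling priority (setpriority()/sched_setscheduler()) controls which process runs first at the critical moment.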

While examining these flaws, Qualys researchers also discovered bugs in other libraries and related components used by Snap: unauthorized unmounting in util-linux's libmount (CVE-2021-3996 and CVE-2021-3995); an unexpected return value from glibc's realpath() (CVE-2021-3998); an off-by-one buffer overflow/underflow in glibc's getcwd() (CVE-2021-3999); and uncontrolled recursion in systemd's systemd-tmpfiles (CVE-2021-3997).

These flaws were patched in their respective components earlier this year. Ubuntu has released patches for CVE-2021-44731 and CVE-2021-44730 for most of its Linux editions, with the exception of 16.04 ESM (Extended Security Maintenance), which is still awaiting a patch. Both vulnerabilities are considered highly severe.

War in Ukraine: semiconductor manufacturing may be affected

The war in Ukraine led by Russia could create shortages of neon, a noble gas used in the manufacture of semiconductors. Ukraine currently supplies about 70% of the world's neon.

According to TrendForce, a Taiwanese research firm, the Russian invasion of Ukraine could exacerbate the global semiconductor shortage.

Neon shortage expected due to war in Ukraine?

Today, Ukraine supplies about 70% of the world's neon. This noble gas, the second lightest of its group, is mainly used in the lithography stages of semiconductor production, and the war in Ukraine led by Russia could create neon shortages.

Analysts say that chipmakers always plan one step ahead, but depending on how long the war lasts, semiconductor production could well be affected. In the short term, global semiconductor production lines have not been interrupted.

However, the reduction in gas supply will bring supply and demand into play, which means that prices are likely to increase, and those increases will likely be passed on to consumers…

Another analyst firm, Techcet, points out that Russia is also a major supplier of neon to the world and that the country also produces a lot of palladium, a metal that is essential for making catalytic converters and many electronic components. Sanctions imposed by NATO members against Russia may cause suppliers to seek alternative sources of supply.

The global supply chain is still very fragile

In the long term, this war may actually increase the shortage of semiconductors. Indeed, Russia’s invasion of Ukraine comes at a time when demand for chips has been rising across the board throughout the Covid-19 pandemic.

On the enterprise side, demand for chips specializing in artificial intelligence is expected to grow by more than 50% per year over the next few years.

The numerous investments announced, such as Intel’s intention to build a huge semiconductor production site in Ohio for $20 billion, the $52 billion announced by the United States or the European Commission’s €43 billion plan, may not be enough.

Gina M. Raimondo, U.S. Secretary of Commerce, believes that “the semiconductor supply chain remains fragile and it is critical that Congress act quickly to pass the $52 billion in chip funding proposed by the President as soon as possible.”

In the U.S., the semiconductor inventory has gone from 40 days ahead in 2019 to less than 5 days ahead in 2022. Automobiles, medical devices, and energy management equipment are the most chip-intensive areas. A new neon supply problem due to the war in Ukraine could have a significant impact on the shortage.

The smartphone market reached $450 billion in 2021

A record figure for a market dominated by Apple and the successful launch of the iPhone 13.

Counterpoint Research, a firm specializing in the study of technology markets, has published a report outlining the state of the smartphone market in 2021. Despite the pandemic and the shortage of electronic components, the sector has achieved the best performance in its history.

Average smartphone price increased in 2021

In fact, the global smartphone market revenue crossed the record mark of $448 billion in 2021, according to the latest study by Counterpoint’s Market Monitor service. This is a 7% increase from the previous year. The average selling price of smartphones has also increased by 12% compared to 2020 to reach $322.

One reason for this trend is the increasing number of smartphones supporting 5G being deployed on the market. Logically, their price is higher than that of devices supporting only 4G. 5G-enabled smartphones accounted for more than 40% of global shipments in 2021, up from 18% in 2020.
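A quick back-of-the-envelope check shows how the report's figures relate to each other (revenue and ASP are the reported numbers from the paragraphs above; the shipment total is an implied value we derive, not a reported one):

```python
revenue_2021 = 448e9   # reported global smartphone revenue, USD
asp_2021 = 322.0       # reported average selling price, USD
asp_growth = 0.12      # ASP up 12% versus 2020

# Shipments implied by dividing total revenue by the average selling price.
implied_units = revenue_2021 / asp_2021

# Backing out the 2020 ASP from the 12% growth figure.
asp_2020 = asp_2021 / (1 + asp_growth)

print(f"Implied 2021 shipments: {implied_units / 1e9:.2f} billion units")
print(f"Implied 2020 ASP: ${asp_2020:.0f}")
```

The implied shipment total of roughly 1.4 billion units is consistent with the scale of the global smartphone market, which is a useful sanity check on the two headline figures.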

As Counterpoint Research explains, demand for high-end smartphones has also been growing over the past year. This is a direct result of the Covid-19 pandemic, as users have been looking for a better experience for education, entertainment or even work from home. The shortage of semiconductors is also impacting the price of smartphones as some manufacturers have increased the price of their devices in order to cope with it.

Apple dominates the smartphone market

Unsurprisingly, Apple dominated the market in 2021 with the very successful launch of its iPhone 13 range. The Apple brand saw its iPhone-related revenue increase by 35% in one year to $196 billion. In 2021, the iPhone accounted for 44% of total global smartphone revenue.

The Cupertino-based firm is followed by Samsung, whose revenue grew 11% from 2020 to 2021. In addition to launching two folding smartphones, Samsung has managed to increase its global market share in the mid- and high-end segments with the launch of the flagship Galaxy S series.

Xiaomi occupies third place, with a considerable 49% increase in revenue. This is due in part to the popularity of Xiaomi devices in India, the firm's largest market, as well as increased shipments and market share for its mid-range and high-end smartphones, such as the Mi 11x series.

The two manufacturers behind Xiaomi are also Chinese: OPPO and vivo, whose revenues rose by 47% and 43% respectively. Note that Huawei, once the world's best-selling smartphone maker, is no longer among the top five manufacturers as a result of the U.S. sanctions against it.

BMW unveils a robot painter that performs feats on car bodies

The German automotive brand BMW has developed a robotic painting process capable of performing custom body painting that usually requires extensive preparation.

Robotics is widely used in the automotive industry, especially for body painting. While robot painters are capable of working faster than a human, they lack the ability to perform custom paint jobs involving different patterns and colors.

But BMW has just made a promising breakthrough with its new EcoPaintJet Pro robot, which can paint entire car bodies with complex multi-color patterns.
Normally, a custom paint job requires many steps with a lot of masking work in order to juxtapose the shades.

BMW's EcoPaintJet Pro uses a process similar to an inkjet printer. With a conventional paint robot, the paint is sprayed through an atomizer that rotates at 35,000 to 55,000 revolutions per minute, and the paint adheres electrostatically.

The EcoPaintJet Pro uses half-millimeter thick jets sprayed through an orifice plate. This system produces highly accurate painted edges and creates intricate designs with color transitions as clean as if masking or stenciling had been used.

Less paint and energy wasted

The robot was tested at BMW's Dingolfing plant in Bavaria on nineteen BMW M4s with a two-tone finish featuring M4 branding on the hood and tailgate. Eventually, BMW wants to expand the use of the EcoPaintJet Pro to offer customers more affordable customization options.

The German automaker also points out that the precision of its process avoids the overspray usually seen in paint booths, which must be cleaned up with chemicals. BMW says the EcoPaintJet Pro will also lower energy consumption by reducing the amount of air needed for booth ventilation. The new robotic painting process will be introduced on BMW's assembly lines starting in 2022.

The first Internet site in history is still accessible

Created by CERN, the very first website was put online at the beginning of August 1991 on another historical piece of equipment: a NeXT computer, a machine that cost a small fortune at the time.

The page is more than rudimentary: devoid of illustrations, its content amounts to 25 links to other pages. It is the very first page of the Web, put online more than 30 years ago, on August 6, 1991, by CERN, the European Organization for Nuclear Research. Simply named World Wide Web, it is the origin of everything we know today.

This page was created by Tim Berners-Lee, widely considered the inventor of the Web. The idea was to point, through hyperlinks, to a vast universe of documents, as the original page itself explains. The page is still online today at its original CERN address (info.cern.ch), where you can find everything related to the history of the project, how to use the Web and how to navigate it.

But before this page was published, Berners-Lee had already developed the Web's underlying document management system as well as the HTTP protocol; the site itself only went live a few years later. And inevitably, at the time, it remained fairly confidential.

The NeXT Computer, the computer that gave life to the Web

As an aside, the inventor of the Web created this universe on a computer from NeXT, a company founded by Steve Jobs after he was forced out of Apple. Powerful and designed for researchers and companies, NeXT computers cost a fortune.

The original NeXT Computer, released in 1988, the machine that sat on Tim Berners-Lee's desk, cost $6,500 at the time, the equivalent of about $15,000 today (roughly €13,900). Thirty years after this computer inaugurated the Web, on August 6, 2021, there were 1.88 billion websites according to the Internet Live Stats counter, and among them, this very first website.

Glimpse Image Editor: free alternative to Photoshop

The open-source photo editing software GIMP has a new fork called Glimpse Image Editor. Still free, this alternative to Adobe Photoshop aims to offer a more pleasant and accessible interface.

Better known as GIMP, the GNU Image Manipulation Program set out to provide an open-source, free solution for retouching photographs. Now a new fork – a new branch of independent development – of GIMP has been started under the name Glimpse Image Editor. The goal of the new software is simple: to make the interface and user interaction more convenient and enjoyable.

A problem with its name

GIMP's development began in 1995, more than 20 years ago, shortly after the 1994 release of Quentin Tarantino's cult film Pulp Fiction. The software's name is taken from that film, specifically from a scene considered shocking and violent. The word gimp is also used as an insult in cases of school bullying, or to offensively describe a person with a disability. As DPReview has reported, many complaints were made to the developers, who declined to change anything. For Glimpse's developers, the new name will be more appropriate for certain environments, such as educational settings.

GIMP interface development stalled

However, the project's leaders insist that the name change is not the only point of interest, even if it was the original motivation. Indeed, the teams in charge of the GIMP interface have not met since 2012, an eternity in the world of software development. The new project is meant to be a breath of fresh air, fuelled by new ideas but also by new funding. The ambition of Glimpse's developers is simple: the newcomer must be more pleasant, simpler, and more accessible to the user. GIMP is often criticized for being the opposite.

For Windows or Linux, macOS will follow

Glimpse is therefore at the beginning of its history, but with good prospects. It intends to answer a reproach often addressed to free software from the Linux world: improve interaction with the end user and make the whole experience less austere. This brand-new effort does raise questions about the project's sustainability. Even so, the possibility of proposing a new copy that may one day surpass its elder is one of the strengths of the free software world.

For more details about the project, the official site includes a comprehensive FAQ. Glimpse is available now for Windows 7 and later, as well as for several modern Linux distributions. The development teams indicate that a macOS version is planned, without giving a timeline.

Open source and the parasite syndrome

An open-source project is both a common good and a public good, an ideal combination for free riders ("parasites"), who want to use the technology, or win customers with it, without contributing to the project. However, there are ways to overcome this syndrome.

The specificities of open source projects

Open source communities should encourage software free riding. Because software is a public good, a non-contributing user does not exclude others from using the software, so it is better to have someone use your open source project than your competitor's software. A free rider also makes it more likely that other people will use your open source project (through word of mouth or otherwise). This type of user can therefore have positive network effects on a project.

Non-exclusivity and non-rivalry

You might think that open source projects are public goods. Anyone can use open source software (non-excludable), and one person’s use of an open-source project does not prevent someone else from using it (non-rival). However, through the prism of companies, these projects are also common goods. Anyone can still use the software (non-excludable), but when an end-user becomes a customer of company A, it is unlikely that he or she will also become a customer of company B (rivalry).

An external agent required

Dozens of academics argue that an external agent is needed to solve the parasite problem. The most common approaches are privatization and centralization. When a common good is centralized, the government takes care of it as an external agent. When a public good is privatized, one or more members of the group receive selective benefits or exclusive rights to that common good in exchange for its continued maintenance. In this case, one or more companies act as external service providers.

Individuals do not seek their common interest

Much research and many books have been written on the governance of public and common goods. Many conclude that groups do not self-organize to maintain the common goods on which they depend.

It’s all about control

The term “appropriator” refers to those who use or withdraw units from a resource: fishermen, irrigators, farmers, and so on, or, in our case, companies that try to turn open-source software users into paying customers. The idea is that the shared resource must be made exclusive (to a certain extent) to give members an incentive to manage it. As soon as there is an incentive, the appropriators participate.

Unlike Windows and macOS, Linux is struggling on the OS market!

Linux is the largest community project in the development world. It is used in almost all technological fields (servers, cloud, mobile, supercomputers, etc.). But its struggle on the PC market can be very confusing. Many have tried to explain this by various problems, including the lack of manufacturers offering PCs with Linux pre-installed; support for drivers and proprietary software; user interfaces that people sometimes find very basic; or the problem of ecosystem fragmentation.

Struggles on the desktop OS market

Among the big names in technology who have given their opinion on the issue, we could mention Linus Torvalds, for whom Linux’s difficulty succeeding in the desktop OS market is mainly due to the fragmentation of the ecosystem. Mark Shuttleworth, founder and CEO of Canonical (the publisher of Ubuntu), spoke of the lack of a futuristic vision. He blames the community, which he says tries to imitate what already exists instead of innovating (as he wanted to do with the Unity project); this leads to forks and fragmentation, which in turn slow down the adoption of Linux on the desktop.

Successful platforms are characterized by different elements that can be easily missed by merely looking at the surface. On the developer side, for example, they have an OS that developers can use to create applications, and they offer an SDK and developer tools integrated into the operating system. There is also a need for documentation for developers, tutorials, etc. so that people can learn to develop for the platform. And once the applications are created, there must be an application store to submit them.

But developers cannot create excellent applications on their own: designers are also needed. And designers need tools to simulate and prototype applications; user interface templates for things like layout and navigation, so that each application doesn’t have to reinvent the wheel; and a graphic design language to visually adapt their application to the rest of the system. The platform also needs HMI guidelines documenting all of the above, plus tutorials and other educational resources to help people learn how to design applications for it.

Need for a mainstream Linux distribution

On the end-user side, you need a mainstream operating system with an integrated application store, where people can get the applications created by developers. The consumer OS may be the same as the developer OS, but not necessarily (for example, this is not the case for Android or iOS). Users must also have a way to get help or support when they have problems with their system (whether it is a physical store, a helpful website, or other).

You can’t talk about a platform until you meet four essential requirements: an operating system, a developer platform, a design language, and an application store. On this basis, if we look in the world of free software, where are the platforms? The only OS that meets the four conditions in the open world is Elementary OS.

Linux? No, because Linux is a kernel, which can be used to create operating systems around which platforms can be built, as Google did with Android. But a kernel in itself does not meet the four conditions and is therefore not a platform.

Version 5.1 of the Linux kernel is available, optimizes asynchronous I/O

In the new version of the Linux kernel, version 5.1, there are new features, many improvements, and some bug fixes. One of the improvements is the activation of Intel Fast Boot by default in the graphics driver for Skylake and more modern processors.

Fast Boot explained

Fast Boot is a BIOS feature that reduces the computer’s boot time. If Fast Boot is enabled, booting from a network, an optical drive, or removable devices is disabled, and video and USB devices (keyboard, mouse, drives) are not available until the operating system is loaded. This means that Fast Boot only loads what is necessary, eliminating screen flicker in the process.

Still on the Intel side of this kernel version, we note support for HDCP 2.2 and GVT (Graphics Virtualization Technology) on Coffee Lake. Coffee Lake is Intel’s code name for the second 14 nm process refinement after Broadwell, Skylake, and Kaby Lake. The graphics integrated on Coffee Lake chips support DP 1.2 to HDMI 2.0 conversion and HDCP 2.2 connectivity. Coffee Lake natively supports dual-channel DDR4-2666 memory when used with Xeon, Core i5, i7, and i9 processors; dual-channel DDR4-2400 memory when used with Celeron, Pentium, and Core i3; and LPDDR3-2133 memory when used with mobile processors.

Linux 5.1 kernel

The Linux 5.1 kernel brings some improvements to the support of ARM platforms, including a new DRM graphics driver for Komeda and support for the Bitmain SoC (two A53 cores and a RISC-V core). Only the ARM part is complete for the moment; RISC-V support is progressing partially. For ARM processors, the default 64-bit kernel configuration now recognizes up to 256 cores, a decision that follows the continuous increase in the number of cores in SoCs. The value can be changed.
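As a minimal illustration, this ceiling is an ordinary kernel configuration option that can be overridden at build time; the excerpt below is a sketch of what the relevant line looks like in an arm64 .config (the menu location is given from memory and may differ):

```
# arm64 kernel .config excerpt (Linux 5.1): the default maximum number
# of CPUs the kernel will support; adjustable via `make menuconfig`,
# typically under "Kernel Features".
CONFIG_NR_CPUS=256
```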

In detail, the BM1880 Bitmain SoC includes a dual-core ARM Cortex-A53 processor, a single-core RISC-V subsystem, and a tensor processor subsystem. In the initial state for Linux 5.1, only the A53 cores are enabled. The BM1880 is marketed as an “on-board TPU” capable of delivering 1TOPS@INT8 performance, with a single-core RISC-V processor capable of up to 1 GHz, optimized for deep learning with a power consumption of only 2.5 Watts. Note that the BM1880 is manufactured by Bitmain, a Chinese company that started out designing ASICs for Bitcoin mining with Antminer and other products. The company has also embarked on artificial intelligence and deep learning projects.

Asynchronous I/O exists to speed up operating systems: it allows applications to perform other tasks while a read or write completes in the background, with the kernel notifying the application when the operation is done. Kernel developer Jens Axboe is now introducing a new variant called io_uring, which aims to increase the speed of asynchronous read and write tasks and allow them to scale better. There is also a userspace library, liburing, that allows developers to familiarize themselves with the main features of io_uring.

PostmarketOS: free and open source, this system aims to keep our smartphones alive for 10 years


Google is stepping up efforts to ensure that Android smartphones enjoy the latest OS and security updates faster and for a longer time. This requires a better structure of the system itself, programs such as Android One and better collaboration with the various manufacturers.

Despite this, it is still not enough. The vast majority of smartphones benefit from software support for only two years, encouraging users to renew their purchases regularly. This is not good for the wallet or for the environment.

Increasing a smartphone’s lifetime up to 10 years

It is to counter this phenomenon that the postmarketOS project was created. It has been in existence since at least 2017, but a recent update of the dedicated website has shed light on it and the subject is very much in vogue at the moment on Reddit.

The concept of postmarketOS is quite simple. The goal of its creators is to allow phones to have a lifespan of 10 years and to ensure that only a hardware failure pushes us to part with a device.

Simplified updates for extended tracking

This free and open system is based on the Alpine Linux distribution, which has the advantage of being very light (6 MB for a basic installation) as well as focused on security and simplicity. The particularity of postmarketOS lies in the fact that each smartphone has only one device-specific package that differentiates it from the others; absolutely all other packages are shared by all devices running the OS. In concrete terms, this greatly simplifies the update process, since there are far fewer device specificities to manage.

Fix the cause instead of the symptoms

This is why postmarketOS claims to be different from solutions like LineageOS, where teams of volunteer developers bring the latest Android innovations to old smartphones. “However, such Android-based projects will always be executed in the shadow of Google and the phone industry, correcting only the symptoms, but never the root cause.”

Because yes, postmarketOS is not a version of Android and avoids this whole ecosystem. However, the managers do not rule out the possibility of offering some compatibility with Android applications, but leave it to potential volunteers to take care of this tedious work.

As for the interface, postmarketOS lets users choose the one that suits them best from an existing catalogue.

100 compatible devices

postmarketOS is still only in alpha, where even calls don’t work yet (which is not very convenient for a phone). The creators of the system boast that they already have more than 100 compatible devices, among which the Google Pixel 3 XL can be found. The latter is undoubtedly the most recent entry in this list, where we can also see the following models:

  • Asus Zenfone 5 (the 2014 model)
  • Fairphone 2
  • OnePlus One
  • Samsung Galaxy S5
  • Wiko Lenny 3
  • Xiaomi Mi 5
  • Nokia N900
  • Nextbit Robin

In any case, the project is interesting to follow, and even if things seem to be moving rather slowly, they are certainly moving forward. To learn more about the practical and technical details, do not hesitate to visit the postmarketOS website.


Is Green Website Hosting Really Going To Make A Difference?


Have you looked into green website hosting options?

Wait, paperless is ‘going green,’ so isn’t the Internet an environmentally-friendly space, to begin with? Well, yes and no, as the Internet leaves its carbon footprint in many different ways. You’ve been made aware, and now you can look into the benefits of green web hosting and what your options are.

The CO2 emissions of data centers are an issue, and the problem is only going to get worse if left alone.

Content has evolved, and there are more websites than ever before. There are web hosting companies out there that are interested in environmentally friendly initiatives. Knowing which of those companies place a priority on environmental protection is critical.

Now, you might be asking yourself what they could be doing differently to protect the environment. They still need a large data center, and that center needs quite a lot of power. What about a solar energy farm? This is one of the initiatives that some of these web hosting companies are exploring.

There are two other renewable energy sources that data centers can use as well: water and wind.

There are also efforts to reduce greenhouse gases. Web hosting companies can get their VER or carbon offset certificate. Before you read any further, let’s address the fact that you still might not be convinced the Internet is leaving such a large carbon footprint. After all, it’s a virtual space, and it’s not like the rest of the world.

Remember the data centers, however, and that’s why these companies are taking steps to protect the environment.

They know, and they are taking action. Let’s speak in equivalents for a moment. Imagine you had a big plane, and you decided to fly to the moon and back over 5,000 times. That would be the equivalent of the Internet’s carbon footprint annually.

There are other ways to describe the impact of the Internet on the environment, too.

It all comes down to those data centers for the most part. But you also have to think about all the electronic items out there as well. Now you can’t be responsible for all of those gadgets that are used to pull up your site. But you can choose a green web hosting company.

That’s a great place to start, wouldn’t you say? When you look at web hosting companies, they are going to be classified into two groups. One of them was mentioned earlier, VER, which means the companies are making an effort to reduce greenhouse gases in the environment. The other category is REC, which stands for renewable energy certificate.

I would say a REC company is best and has made the most significant effort to reduce the impact that the Internet has on the environment. Find more on such web hosts on this website.

Choosing one of the companies that falls into either group would be just another small way that you can make a difference in the world. We all have to do what we can.

eBay will introduce its own open source server designs

eBay has embarked on a large-scale reconfiguration of its architecture: designing custom hardware and a dedicated artificial intelligence engine, decentralizing the data center cluster, evolving to cutting-edge computing, and developing open source technology solutions.

The project is nearing completion, and the new servers are already operational; their architecture will be made public, in effect becoming open source. Committed for three years to a project to renew its platforms and modernize its backend infrastructure, eBay announces that it will build its own server designs and offer them as open source by the end of 2018.

Launched by Facebook 7 years ago, the Open Compute Project (OCP) is an initiative to share server designs and make them available in open source.

The latter has grown over the years with the support of leading IT companies such as Apple, Microsoft, Google, HPE, Rackspace and Cisco.

However, some heavyweights are missing, such as eBay, which announced last weekend its intention to develop its own servers and share their open source design so that other companies can use them for their own needs. While the U.S. online retail giant has not yet made any announcement regarding OCP membership, it is very likely that it will end up joining in the coming months.

“As part of an ambitious three-year effort to reconfigure and modernize our back-end infrastructure, eBay announces its own custom servers designed by eBay for eBay. We plan to make eBay servers publicly available through open source in the fourth quarter of this year,” the company said in a post. “The reconfiguration of our core infrastructure included the design of our own hardware and AI engine, the decentralization of our data center cluster, the adoption of a next-generation IT architecture and the use of the latest open source technologies.”

Leveraging AI on a Large Scale

Among the technological bricks used by eBay are Kubernetes, Envoy, MongoDB, Docker, and Apache Kafka.

The infrastructure developed by the e-merchant allows it to process 300 billion daily requests for a data volume of more than 500 petabytes.

“With the transformation we’ve achieved and the large amounts of data flowing through eBay, we’ve used open source to create an internal AI engine that is widely shared among all of our teams and aims to increase productivity, collaboration, and training. It allows our data scientists and engineers to experiment, create products and customer experiences, and leverage AI on a large scale,” eBay said.

What Is Open Source?


Open source is one of the greatest inventions since sliced bread. We can safely say that it has changed the way we make websites and apps. Thanks to open source code, creating an online presence has become way cheaper than it used to be a while ago when the internet was in its infancy.

Open source is nothing else but code that is free for everybody to access, modify, and use as they see fit. WordPress, Drupal, and Joomla! are only three examples of projects that are based on open source code. This is something relatively new. Before the open source movement, websites and internet applications didn’t offer free access to their code. Everything was closed, so website owners had to pay their coder to make changes whenever needed. Besides, even if you had access to the original code, you weren’t allowed to use it in your own projects, as it belonged to its creator. Replacing your web developer was a huge problem, as most of them wrote their own idiosyncratic code, difficult for another coder to understand. Many also obfuscated their work before their websites or apps went public so that nobody would steal their code.

Open source code is entirely different.

You can reverse engineer projects based on it, then take whatever code sequences you want and use them to create something new. There are no limits when it comes to tweaking and adjusting the code to suit your needs. You can find open source projects online on GitHub or various blogs, as well as in discussion forums and Facebook groups on IT and coding topics. Everything is accessible and easy to use, making the life of web developers much easier. Furthermore, many people have specialized in developing add-ons and plugins for the most popular open source apps. All this makes it very easy for anyone who wants a professional website to build one without much coding knowledge. Without open source, all these people would have needed to pay expensive developers to build and update their websites.

Strong communities

The most significant advantage of open source projects is that they are developed and maintained by teams of experienced coders. This means that the code is always up to date with the latest technologies and with the latest security features. At the same time, open source projects are also the most exposed to hackers and other cyber criminals out there, as they also have access to the code, just like everyone else. Keeping open source apps secure at all times is one of the most significant challenges for programmers from all over the world.

This is open source in a nutshell. You can easily see that it has made the web a more user-friendly environment. Even beginners can learn how to use this code to create beautiful apps with advanced functionality and professional appearance. Our modern world is more inclined to sharing knowledge and information than ever before. This is good for all of us, coders and consumers.

Microsoft Is Planning To Acquire GitHub For $7.5 Billion


Microsoft is on target to acquire a coding platform that has become very popular with software coders and developers around the world. The tech giant is in the process of buying GitHub for a reported 7.5 billion dollars. At last check, GitHub was valued at almost $2 billion.

Once combined, the two companies will help empower developers to achieve more at every stage of the development process, bring Microsoft’s developer tools and services to an entirely new audience, and accelerate enterprise adoption of the coding platform.

The Purchase Agreement

Microsoft has a long-standing reputation as a company that puts developers first. By joining forces with a coding platform such as GitHub, the tech giant plans to strengthen its commitment to developer freedom, innovation, and openness.

Microsoft is well aware of the community responsibility it is undertaking under the agreement, and the company promises to empower all developers to innovate and tackle some of the world’s most pressing challenges.

Under the terms of the agreement, the $7.5 billion purchase of GitHub will be completed in Microsoft stock. The purchase is also subject to the completion of a regulatory review and customary closing conditions. If everything goes as planned, the acquisition is expected to close by the end of the year.

Upcoming Changes For GitHub?

Also under the agreement, the coding platform will retain its developer-first ethos and will continue to operate independently. By retaining this independence, GitHub will also be able to remain an open platform for developers in any industry.

This means that developers will still be able to use programming languages, operating systems and tools of their choice for all of their projects. These developers will also be able to still deploy their code for any operating system, device or cloud.

Global Digital Transformation

In today’s global economy, there are more software companies now than ever before. This places software developers at the forefront of the digital transformation that has been taking place since the dawn of the 21st century.

These companies and developers are driving business functions and processes across departments and organizations. This covers areas from HR (Human Resources) to IT to customer service and marketing. The choices that developers make will have an impact on growth and value creation in every industry.

GitHub has become the home for today’s developers, and it is considered to be the top global destination for software innovation and open source projects. The coding platform currently hosts an extensive network of software developers in almost every country in the world. These developers represent over 1 million companies in industries including:

  • Healthcare
  • Technology
  • Retail
  • Manufacturing
  • Financial Services

Microsoft expects GitHub’s financials to be reported as part of its Intelligent Cloud segment. The acquisition is expected to be accretive to operating income in fiscal year 2020, on a non-GAAP basis.

What Are The Best Linux Distributions Available To You?


The world of operating systems has been practically dominated by Microsoft Windows for several consecutive decades now, although Apple software is also out there on its associated hardware. Growth and innovation in the netbook and laptop markets have also brought in new players like Google’s Chrome OS, but for the most part, Apple and Microsoft rule the scene.

Despite all this, Linux has hung around, catering to a select base of users. Some individuals prefer it at an enthusiast level as either a complement or even a replacement for corporate software, and some companies like using it because the very nature of Linux distributions means they can be had freely.

Whatever your reason for being curious, you might be in a position where you are wondering what the best Linux distributions are right now. It’s not a question that is quickly answered, as one single distribution rarely proves best for all uses and cases. In fact, what you intend to use a Linux distribution for will often determine which one is the most optimal choice for you.

The first thing you should establish is your minimum system specifications on the computer or device you intend to run a Linux distribution on. Most of the time, such distributions will need fewer resources than another operating system, which is something many Linux users love, so you’re probably safe. Still, you don’t want to get a distribution you can’t run. In fact, you should verify you can run it well.

Secondly, consider whether it will share a machine with another operating system or have a computer all to itself. Some Linux distributions coexist with other operating systems better than others.

Third, ask yourself what your intentions are. If you’re looking for an alternative operating system because you’re tired of the instability and cumbersome controls often associated with Microsoft Windows, then a stable, beginner-friendly system should be your goal. On the other hand, if you’re looking for something to support a gaming rig, you want something that uses far fewer resources than Windows so your games have more dedicated power, yet also offers specific controls over components and possibly even overclocking for your CPU and graphics card.

One final decision is whether you want to buy a retail package or download a distribution for free. A retail package might be more convenient to install and use, and might even come with some support. Then again, you would be paying for something that could be free.

It’s not a bad idea to ask around or look online. PC sites are always updating their lists of the best Linux distributions available to reflect the current state of affairs, and any Linux enthusiasts you know are probably going to be more than happy to discuss things with you since they can show off their knowledge and expertise.

TYPO3 is available in version 9.2.0


Version 9.2 of the open source content manager focuses on site management and aims to “boost publishers’ productivity, push developers’ creativity and make integrators’ lives easier.”

Site Handling

The most remarkable new feature of TYPO3 version 9.2 is the site management feature. Introduced in version 9.1, the “Site Management” module in the TYPO3 administration space now contains a new “Configuration” sub-module. It allows integrators and site administrators to add and modify a global configuration for one or more sites.

Each site configuration has a unique identifier and configuration values such as the root page ID, entry point, language definitions, and so on. The configuration is stored in a YAML file under “typo3conf/sites/site-identifier/”. It is therefore easy to maintain the configuration in a version control system such as Git.
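A minimal sketch of such a site configuration file (the identifier, page ID, and domain are illustrative; the key names follow TYPO3’s site handling documentation):

```yaml
# typo3conf/sites/site-identifier/config.yaml
rootPageId: 1
base: 'https://example.org/'
languages:
  - title: English
    enabled: true
    languageId: 0
    base: /
    locale: en_US.UTF-8
```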

The site management functionality already supports configurations such as domains, languages, and error handling. According to the development team, this feature will be extended for the v9 long-term support version later this year.

Debugging and profiling

The TYPO3 Control Panel now provides a more in-depth overview of TYPO3’s internal processes at runtime. Once it is enabled, TYPO3 integrators and site administrators can access performance and cache statistics and settings for a specific page. They can also simulate certain front-end access situations. It is possible, for example, to impersonate a specific user group or to simulate a time stamp.

As for the administration panel, it will receive a significant revision in future versions to conform to the highest standards. To prepare for this development, it has been moved from the kernel to a dedicated system extension. This step also lays the foundation for other improvements, such as a new modern design, better profiling capabilities, and the ability to add custom features via an API.

Changes to anticipate the future

Although TYPO3 is not new to the open source CMS market, its core code is continually being reworked to adopt contemporary technologies and modern software paradigms. In particular, TYPO3 aims to support PSR-15 middleware ready for use by adopting the eponymous standard. For the development team, this approach will improve interoperability with independent libraries. As one of the first enterprise content management systems on the market, TYPO3 version 9.2 introduces PSR-15 middleware in the frontend, as well as in the backend.

The TYPO3 v9 long-term support version is scheduled for November 2018. This version will avoid constants and global variables where possible. To achieve this, a new “Environment” class has been developed, which acts as a central repository for commonly used properties throughout the kernel. This class also contains methods relevant for all types of PHP requests, both CLI and Web.

Security in Typo3

In the continuous security improvement process of the content manager, the path to the “var/” directory can now be configured via the TYPO3_PATH_APP environment variable, which the Apache web server can set with a configuration directive. This directory usually contains Install Tool session files, caching framework files, lock and log files, and Extension Manager data files. Even though a correctly configured web server and TYPO3 instance prevent access to all sensitive files in the “var/” directory, these are clearly non-public files, and they can now be located outside the web root.
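As a minimal sketch, assuming Apache’s mod_env is enabled and an illustrative project path, the environment variable could be set like this in a virtual host:

```apache
# Expose TYPO3_PATH_APP to PHP so TYPO3 resolves var/ outside the web root
# (requires mod_env; the path is illustrative).
SetEnv TYPO3_PATH_APP /var/www/my-project
```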

Getting TYPO3

TYPO3 can be installed in different ways: the traditional way, by using the source package from typo3.org, or the modern way, by setting up a project using Composer. More details via get.typo3.org/version/9

GIMP 2.10 is available

The leader among open source image editing software receives a significant and much-anticipated update. The adoption of the GEGL image processing engine, in particular, brings the most significant benefit of this new version.

For GIMP users, it took patience to receive a significant update of the software. No less than six years of development were necessary to deliver all the new features of version 2.10.

The results are nevertheless up to expectations: GIMP finally supports RAW formats via the free software RawTherapee or Darktable. The most important innovation is the new image processing engine, GEGL, with high bit-depth support. This non-destructive processing engine offers, among other qualities, a multithreaded approach and hardware acceleration. Over 80 GEGL-based filters are already available.

Other new features of GIMP 2.10 are more visible: a refreshed interface, a more modern visual presentation, and extensions via plugins. The software now supports the OpenEXR, RGBE, WebP, and HGT formats and improves compatibility with Photoshop’s PSD format on import. Color management becomes a fundamental feature of GIMP: most windows and preview areas offer color management. The preview for all filters is compatible with GEGL. Finally, metadata viewing and editing are available for the Exif, XMP, IPTC, and DICOM formats.

GIMP is not yet a 100% Photoshop replacement for purists, but for most image editing and processing operations, it now has little to envy its rival.

A growing demand for open source talents


The annual report on employment in the open source sector released by the Linux Foundation and Dice is available. This report shows that opportunities are growing for qualified open source professionals.

The survey was conducted among more than 750 hiring managers and 6,500 open source professionals. The summary of the report’s conclusions is very positive and shows some significant changes since the 2017 report:

Hiring open source talent is a priority for 83% of recruiters, up from 76% in 2017.

Linux is back among the most popular open source skill categories, making it knowledge required for most entry-level open source careers.

Containers are rapidly gaining in popularity and importance, with 57% of hiring managers seeking this expertise, up from only 27% last year.

There is a gap between the views of hiring managers and information technology professionals on the effectiveness of efforts to improve diversity in the industry.

Hiring managers are moving away from external consultants, choosing instead to train existing employees on new open source technologies and help them obtain certifications.

A still very tight recruitment market

While 55% of open source professionals surveyed say it is easy for them to find a job, and 87% believe that mastering open source has boosted their careers, the situation remains just as difficult for recruiters: 87% report difficulties in recruiting.

To retain the most interesting profiles and attract talent, several strategies are being put in place. Among them, training, and particularly certification, has become an essential weapon: the share of companies implementing such plans has doubled since 2016, reaching almost half of respondents (42%). Developers cite training as their main difficulty (49%) in the open source sector, ahead of the lack of documentation (41%).

Salary remains the primary motivator for accepting a position (30%), but 19% of open source professionals say their main motivation lies in the originality of the projects, and 14% cite the ability to balance professional and personal life. In addition, 10% consider flexible working hours and remote work the main reason behind their decision.

The most sought-after skills in the open source market

The only upheaval in the 2018 ranking of sought-after skills: Linux. It had never fallen far, but mastery of the operating system came back in force, with 80% of recruiters looking for this skill. With 44% of recruiters seeking profiles that master containerization technologies, the growing trend observed over the last two years is confirmed, placing these technologies among the most fashionable in technology companies. The rest of the podium: cloud, security, web technologies, and networking.

Suse will continue its open strategy following purchase


A pioneer of the open source era and the first company to provide open source services to enterprises, SUSE has been acquired for 2.535 billion dollars by the Swedish private equity group EQT Partners. The acquisition comes shortly after the beta release of SUSE Linux Enterprise 15.

The largest transaction in SUSE's history

With 1,400 employees worldwide, SUSE achieved sales of nearly $35 million over the twelve months ended October 2017. The sale price is 26.7 times the adjusted operating income of the SUSE software unit for that period.

Since its creation by German students, SUSE (Software- und System-Entwicklung) has changed hands several times: it was bought by the American software company Novell in 2003 for 120 million dollars, as part of a strategy to compete with Microsoft's operating systems. Without success: Novell was itself bought by Attachmate Group for 2.2 billion dollars, and in 2014 Attachmate merged with the British software company Micro Focus for 1.2 billion dollars. The acquisition by EQT Partners therefore represents the largest transaction in the company's history.

SUSE to focus on infrastructure

SUSE appears to be pleased with the new partnership with its new owner EQT Partners and is also committed to focusing on its expansion into the IT infrastructure field.

“This is exciting news for all of us at SUSE and marks the next step on our path of growth and momentum. The investment and support provided by EQT will enable us to continue to drive our strategy.”

What about open source?

In the announcement on the company’s blog, SUSE wants to reassure about its commitment to the open source world and the continuity of development projects:

“SUSE intends to continue its commitment to the open source business and development model, and to actively participate in communities and projects aimed at bringing open source innovation to the enterprise in the form of high-quality, reliable, and usable solutions. Our genuinely open source model, where ‘open’ refers to the freedom of choice offered to customers and not just the code used in our solutions, is part of the SUSE culture, differentiates us in the marketplace, and has been the key to our years of success.”

The company also confirms that the current management team will stay in place: “The current management team, led by SUSE CEO Nils Brauckmann, will remain and continue to focus on the success of customers and partners, with a deep commitment to communities and open source projects.”