Saturday, 30 May 2009

When and how to compile your own kernel

I've recently had a few debates about the pros and cons of compiling your own Linux kernel, so I've decided to write down a few thoughts about it.

First of all, when should you recompile your kernel? My humble opinion is: only when it's essential to you. That means you have a piece of hardware which requires a kernel recompile (not just adding a module or two), or you need a newer kernel than the one available in your repositories, or there is a feature in the kernel which you really need and you can't or don't want to switch to a different Linux distribution that has it out of the box.

Before anything else, though, pull the latest updates from your distro's repository. Sometimes there is a newer kernel in the repo which will make everything magically work. If it doesn't, download the kernel sources, configure them and compile.

I won't go into detail about the basics of compiling a kernel. This is actually pretty straightforward nowadays: you untar the archive, make menuconfig, make, make modules_install, run mkinitrd or mkinitcpio (with the appropriate -k and -g arguments), copy the initrd and kernel image to the /boot directory, edit GRUB's menu.lst and you're ready to go. Ask Google for details - the web is literally bursting with this stuff.
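For reference, a bare-bones run looks something like this. It's just a sketch for an Arch-style x86 box with a 2.6.29 tarball - the version, file names and the mkinitcpio step are my own choices, so adjust them to your distro (and use mkinitrd where appropriate):

  tar xjf linux-2.6.29.tar.bz2 && cd linux-2.6.29
  make menuconfig                  # configure the kernel
  make && make modules_install     # build the image and install modules to /lib/modules/2.6.29
  cp arch/x86/boot/bzImage /boot/vmlinuz-2.6.29
  cp System.map /boot/System.map-2.6.29
  mkinitcpio -k 2.6.29 -g /boot/kernel26-custom.img
  # finally, add an entry for the new image and initrd to /boot/grub/menu.lst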

There is an important issue I'd like to talk about. Many people claim significant performance gains when they recompile their kernels. However, I've never seen any benchmarks backing this up. At best you'll shave a little off your boot time (if anything), and a trimmed-down kernel compiles a bit faster (again, hardly a reason). If your distro's kernel is compiled well, you needn't worry about any of that, and nowadays openSUSE, Fedora and Slackware compile their kernels very well. Others are so-so, but they mostly all work.
So, if you're looking to compile your kernel just to gain performance, you might as well give up right away. You mostly won't notice the difference.

On the other hand, there are (or were) various patchsets which improve(d) the overall performance of the system, depending on what you expect of it. Such were -viper (R.I.P.) and -ck (later -rt, but that one wasn't even a shadow of its predecessor), and eventually most of their code made it into the mainline kernel.

So what gives?

If you like to experiment, you can "borrow" a kernel source from a different distro. I used the openSUSE kernel on my Arch Linux system for a long time, and will probably use it again once it starts supporting the ext4 filesystem. I'm just too lazy ATM to reinstall everything after formatting back to ext3. OTOH, the Arch stock kernel works fine. Apart from openSUSE, good choices may be Fedora or Gentoo; all of their patchsets are interesting to play with. I repeat, however: this is if you like to experiment. Don't count on something that will magically make your system faster, more stable and more secure.
Another interesting patchset is the Zen patchset (I've mentioned it before), which can be obtained from www.zen-sources.org. It's claimed to be "stable and featureful", which is very true, and it gives you a lot to tinker with, as well as support for a lot of new hardware.

OK, so how do you compile?
Easy. Just untar the sources, make menuconfig, make and make install. :)
Seriously, there are things you need to look out for.

Do you want stability and a small kernel image? Look out for the stuff labelled "EXPERIMENTAL" or "DEPRECATED" and disable all of it, unless you're absolutely positive (not just sure) that you need it.
Apart from that, you can also remove things you don't need or use, such as drivers for devices you don't have (a quick way to check what you actually need is sketched below). But I hope you all know that by now (if from nowhere else, then from the articles some of you googled earlier).
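A low-tech way to see which drivers you actually use is to ask the running distro kernel before you start cutting. This is only a sketch - lspci -k needs a reasonably recent pciutils, and the output file name is made up:

  lsmod | awk '{print $1}' | sort > drivers-i-use.txt   # every module currently loaded
  lspci -k                                              # PCI devices and the kernel drivers bound to them
  lsusb                                                 # USB devices you'll want drivers for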

How about building everything you can as a module? The answer is simple: don't. The kernel image may end up smaller, but that doesn't mean it will be faster - quite the contrary, to be honest. You will lose performance if you modularize everything like crazy. Sure, you can build device drivers and things you don't use all the time as modules, but that's it. Your core kernel features should stay built into the kernel.
This has been much debated. My answer to all of it is: look at the Fedora and openSUSE kernels (both compiled by teams of engineers - and believe me, these guys know what they're doing). Their images weigh more than 2.5MB. You can easily cut that down to 1.5MB and, with a little effort, below 1MB. What's the point?

What if you're not sure? Simply leave the option at its default value. Or build it as a module and see whether it's already loaded in a minimal system environment. If it is, consider building it into the kernel.
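For example, if you're on the fence about something like ext3, build it as a module first, boot the new kernel, and check (swap in whatever option you're actually unsure about):

  lsmod | grep ext3             # loaded right after boot? then it's a candidate for building in
  grep CONFIG_EXT3_FS .config   # =m means module, =y means built in, unset means disabled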

To conclude, there are always a million choices - just like with everything else you do in Linux. Just watch what you're doing, and read up.

Oh yeah, and back up. You never know when you're going to screw up your own best-kernel-in-the-world, and then the stock kernel might come in quite handy :)
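What I mean in practice is simply keeping the distro kernel entry in menu.lst next to your own, so there's always something bootable left. The titles, devices and file names below are placeholders from an Arch-style setup, so adjust them to yours:

  # /boot/grub/menu.lst
  title  Custom kernel 2.6.29
  root   (hd0,0)
  kernel /boot/vmlinuz-2.6.29 root=/dev/sda1 ro
  initrd /boot/kernel26-custom.img

  title  Arch stock kernel (fallback)
  root   (hd0,0)
  kernel /boot/vmlinuz26 root=/dev/sda1 ro
  initrd /boot/kernel26.img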

Tuesday, 18 November 2008

The latest Zen sources, kernel 2.6.28-rc4-zen

Since zen-sources.org is still down :( a number of users haven't been able to grab the latest sources for this popular patchset. Well, rc4 isn't the latest bleeding-edge kernel anymore, but still, I don't know whether there is a zen kernel for rc5 yet. lucke, a good soul on the Arch Linux forums, has posted a Rapidshare link to download the sources, since he managed to grab them before the main zen-sources server had its mobo fried.
Frankly, I'd like to know which kernel this server was running :D

So, without further ado, here's the link. Enjoy!

Tuesday, 28 October 2008

Installing Fallout 2 under WINE in Linux

Fallout 2 is the best CRPG ever, at least in my book. Yes, I know Fallout 3 is coming soon, but somehow I'm afraid of it, after seeing the butchering Bethesda Softworks did to The Elder Scrolls, turning a very good RPG (Morrowind) into a mediocre FPS (Oblivion).

Now, I've recently come upon killap's Fallout 2 Restoration Project, which doesn't add any unofficial content, but unlocks original content that was unreachable due to coding bugs. Killap, I bow to you here and now.

I've also noticed there's a number of people trying to install this under WINE, with not much success, or with very dirty solutions, such as using the native ddraw.dll.

My solution was, in the end, quite simple and everything works beautifully. Here's how I did it:

1. Install WINE (if you haven't already), with your favourite package manager. Mine is pacman, since I use Arch Linux.
  • # pacman -Sy wine
2. If you like, you can install winetricks (which will allow you to easily install DirectX, but I don't believe this is needed for Fallout 2; it may be for some other games, though). Again, with your favourite package manager.
  • # pacman -Sy winetricks
3. Put your Fallout 2 install CD in the drive, or mount the ISO (if, like me, you don't like juggling CDs and make ISO images before installing - there's a short sketch of the ISO route right after this step), and point a WINE drive letter at it via winecfg. You should also set the Windows version in winecfg to "Windows 98"; the Restoration Project needs this to install properly, so that WINE reports things the right way.
Now run the install:
  • # wine "X:\setup.exe"
(where X: is the drive letter you mapped to the CD/ISO; the quotes keep the shell from eating the backslash)
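If you go the ISO route, the mount part can look like this - just a sketch, the image name and mount point are made up, and the drive letter itself is added under winecfg's Drives tab (or as a symlink in ~/.wine/dosdevices):

  • # mkdir -p /mnt/fallout2
  • # mount -o loop Fallout2.iso /mnt/fallout2
  • $ ln -s /mnt/fallout2 ~/.wine/dosdevices/x:    # maps X: without going through winecfg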

4. After installing Fallout 2, you don't need any patches except the RP - it already includes all the patches you need. Make sure once more that your Windows version in winecfg is set to Windows 98, and run:
  • # wine F2_Restoration_Project_1.2.exe
And follow the installation procedure to the end. After that, you can set the Windows version in winecfg back to whatever it was before.

5. Run the game. Voila! It works, and pretty well, too. I can't emphasize this enough: killap has done a great job, and is still working on it. Yes, version 1.3, with even more bugfixes, is coming out soon, so if you like it, give the man some credit.
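If you'd rather start it from a terminal, something like this works for me - the path below is just the installer's default (an assumption on my part), so check where you actually installed it and the exact filename case:

  • $ cd ~/.wine/drive_c/Program\ Files/BlackIsle/Fallout2
  • $ wine fallout2.exe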

Happy gaming! :)