Wednesday, October 24, 2012

A common SSD problem - or - How I Stopped Giving A Fuck

Reactant

So, while reading the forums about the hot new SSDs, I formed an opinion of my own. Mostly, from what I've seen, people are experiencing problems along the lines of the first world problem image here:
Problem officer?
These people obviously have issues.

From a user's perspective, adding an SSD improves random access times and data transfer speeds very noticeably. But even though random access is the most noticeable benefit of a solid state drive, it doesn't come even NEARLY close to saturating a single SATA 2 (3 Gbps) port, let alone a SATA 3 (6 Gbps) port.

However, as far as sequential read and write speeds go... well yes, it is a good thing to have a more modern SATA standard to connect your SSD to. But calling an SSD on an older SATA port "slow" is not fair to something that makes your computer jump multiple generations between the day you bought it without an SSD and the day you upgraded it with one.
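If you want a rough number behind that claim, here's a quick sketch on Linux (assuming hdparm is installed and your SSD sits at /dev/sda - adjust the device for your system):

[root@atomsk ~]# hdparm -tT /dev/sda

The -t switch times buffered sequential reads straight off the drive and -T times cached reads; on a decent SSD the -t figure lands near the SATA link limit, which is the whole point above.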

All in all, it's a real benefit, and exactly where it matters most.

! BONUS ! BONUS ! BONUS !

So I created a meme of my own. Check this out:

Somehow Picard doesn't give a fuck about your problems.
I have an aftermarket SSD in my corporate Macbook Pro connected to a SATA 3 port and I don't give a fuck about your problems.

Monday, October 22, 2012

Lossless compression with modern multi-core computers

Preface

If you recently bought a computer or a modern smartphone, you have probably heard technobabble like "Dual Core", "Hyper-Threading", "Quad-Core", "Simultaneous Multithreading" or "Multi CPU". That's because all modern computers (laptops, tablets, desktop PCs, smartphones) come with more than one CPU core. I assume that The Kind Reader knows what a CPU is, so I don't have to explain a lot here.

A Multi-Core World

A smartphone, A FUCKING PHONE, has more than one CPU core these days. So what is up with that? Marketing? Feature set? Multimedia? All of it, actually: you can have and use features on your phone that weren't possible before because of limited computational power. You can do more with your phone than just make calls or send messages. 3D games perhaps? Why not :)

Building multi-core CPUs is very complicated: you have to plan the architecture, design the die, manufacture it in a semiconductor foundry, test every die, package it and, depending on the test results, brand it with a model name and a suggested price.

An Intel Ivy Bridge CPU die. Just look at those numbers !!!!
If you have a modern computer (laptop, desktop, tablet...), chances are your CPU has more than one core - my corporate laptop has 4 of them. So it's only understandable that I try to figure out how to give all of them a nice amount of computational load. It's only fair, since they're at my disposal :)

You can write applications that do calculations in parallel on each core, so the time needed to complete a calculation shrinks depending on how parallelizable the task is (see Amdahl's Law) and how many cores/CPUs you have. That lets you accelerate things like encoding videos, compressing music (one file per core OR one file across many cores), mathematical calculations, multitasking... the list grows with time.
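To make the "one file per core" idea concrete, here's a minimal sketch with xargs (assuming the flac encoder is installed and the current folder holds some .wav files - swap in whatever encoder you actually use):

[root@atomsk ~]# find . -name '*.wav' -print0 | xargs -0 -n 1 -P 4 flac

The -P 4 switch keeps 4 encoder processes running at once - one per core on a quad-core machine - while -n 1 hands each process a single file.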

But it's difficult to write parallel programs, because not every task can be parallelized.

For example, you cannot parallelize drawing an image to the screen, but you can parallelize compressing an image (split it into a checkered field of sections and process them in parallel).

You can also use all cores for data compression. And that's what we'll cover here.

And a small tutorial, finally

There are numerous ways to do data compression in parallel. So,
  • On Windows-based systems, you can use an application called 7-Zip to do data compression and decompression. You don't need to set up anything; just install 7-Zip and compress some files! It will distribute the calculation workload across all CPU cores (see the command-line sketch after this list)
  • On Linux-based computers, things are a little more complicated. You first pack the files into an archive, and then apply a compression mechanism. The default tools use only one CPU core, and additional hacking must be done for parallel data compression.
  • On Mac OS X based systems, I don't care. Not because they're fast and you don't need to do squat, but because OS X thinks it's smarter than you and wants you to do stuff its way. I want to do stuff my way, so for religious and lifestyle reasons, I don't give a fuck how OS X compresses files in parallel.
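For the Windows case, a minimal command-line sketch (assuming 7z.exe from the 7-Zip install is on your PATH - the GUI does the same thing):

C:\> 7z a -mmt=on foo.7z foo

The "a" command adds the folder foo to the archive foo.7z, and -mmt=on explicitly turns multithreading on (it's already the default for the 7z format).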

Compress & Decompress files in Linux

You need the tar archiver and a compress/decompress application. For the sake of this blog post, I will pick the gzip compress/decompress program.

Say you have a folder called foo. Then:

[root@atomsk ~]# tar cpvaf foo.tar.gz foo

That command will compress the folder/file foo into a file called foo.tar.gz. Tar means Tape ARchiver, because magnetic tape storage was, and still is, used to create backup archives of sensitive files for... whatever happens :).

OK, so tar is a tape archiver program, but what does "cpvaf" mean?

"cpvaf" are the argument switches that order the program what to do. You will either read the manual somewhere else OR continue reading :)
  • "c" - means that the tar program will (c)reate a new .tar file
  • "p" - means that the file access permissions will be (p)reserved. If the file is read-only and belongs to the user called linuxuser, after decompressing the file will return as read-only and belonging to the user linuxuser.
  • "v" - will just display which file it's packaging. It's short for (v)erbose.
  • "a" - will tell tar that it (a)utomatically figures out the compression method based on the result file extension. In our case, that's .gz which is gzip.
  • "f" - means that the output will select a (f)ile or (f)older for archiving or unarchiving. In our case, the "f" argument packages a folder called foo.
But sadly, the compression will only use one CPU core. You can also skip compression entirely and just package the files (leave the .gz extension off the target file); then computational power is not really needed, just the I/O performance of the computer doing the packaging.
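For the no-compression case, the same command minus the .gz extension does the trick - the "a" switch sees no compression suffix and writes a plain tar:

[root@atomsk ~]# tar cpvaf foo.tar foo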

MULTICORE COMPRESSION BABY!!!!

You need an application that can do file compression in multiple threads. The OS will then distribute the threads across the CPU cores and greatly improve compression performance!!!
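If you're not sure how many cores you have to play with, a quick check on Linux (nproc ships with recent GNU coreutils; the /proc line works everywhere):

[root@atomsk ~]# nproc
[root@atomsk ~]# grep -c ^processor /proc/cpuinfo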

For gzip, there's a program called pigz, which stands for parallel implementation of gzip - and it's not pronounced like "pigs" but like "pig-zee" or "pig-zed" if you're British or want to pronounce stuff the right way :).

But I'm afraid things become a little more complicated here, because you need to know a little more about the UNIX shell and a concept called piping.

[root@atomsk ~]# tar cpvf - foo | pigz --fast > foo.tar.gz

A pipe in shell scripting means that the output of a program is not saved to a file or displayed on the screen (standard output); instead, it is held in memory and fed straight into the input of another program. That gives you the option to redirect the output of one application to the input of another using the pipe operator "|".
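Before we apply piping to compression, a trivial warm-up example (assuming a stock shell) - ls lists the /etc folder and wc -l counts the lines it receives through the pipe:

[root@atomsk ~]# ls /etc | wc -l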

And... let's GO!!!!
In our example, the output of our tar program is piped into the input of the multithreaded implementation of gzip, which compresses the tar stream in parallel and saves it to foo.tar.gz (the "-" after "f" tells tar to write the archive to standard output instead of a file). The "--fast" switch selects the fastest compression level, which also gives the least compression.
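If you want more control, pigz also takes a compression level and a thread count (the -p value below is just an example - match it to your core count):

[root@atomsk ~]# tar cpvf - foo | pigz -9 -p 4 > foo.tar.gz

Here -9 trades speed for the best standard gzip compression, and -p 4 caps pigz at 4 worker threads.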

And to decompress....

We need piping here too: first decompress the file, then hand the result to tar to unpack the archive. In a nutshell, decompression reverses the order of operations from compression. Since compressing was the last step, decompressing comes first, and unpacking the archive comes last, since packing the folder came first.

[root@atomsk ~]# pigz -d < foo.tar.gz | tar xvf -

And that's it. The "-d" switch means that pigz will (d)ecompress the target file foo.tar.gz and pipe the output into the tar program, with switches x - e(x)tract, v - (v)erbose and f - (f)ile as an input argument. The "-" at the end means that the archive is read from the pipe.
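As a side note, reasonably recent GNU tar can hide the pipe for you with the --use-compress-program switch - same multithreaded result, one command (assuming your tar version supports it):

[root@atomsk ~]# tar --use-compress-program=pigz -cpvf foo.tar.gz foo
[root@atomsk ~]# tar --use-compress-program=pigz -xvf foo.tar.gz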

And the end

Since you bought all those cores, it's only reasonable to use them at 100% where possible. The speed of compression now depends solely on CPU performance, since hard drives, and especially SSDs, deliver more than enough I/O to feed a modern CPU with data to compress.

BONUS QUESTION

Can someone explain how to use multithreaded compression on Mac OS X? When I created disk images with Disk Utility, it did compress on all threads. So it's possible, BUT HOW?
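One guess while we wait for an answer: pigz itself builds fine on OS X, so if you have a package manager like Homebrew or MacPorts (an assumption - any way of getting the pigz binary will do), the exact same pipe from above should work in Terminal:

$ brew install pigz
$ tar cpvf - foo | pigz --fast > foo.tar.gz

That still doesn't explain what Disk Utility uses internally, though.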