My tribute to Dennis Ritchie went wrong!


Yesterday. … It’s the day that precedes today. It’s also the day Ubuntu 11.10 was released, the day I learned of the death of Dennis Ritchie, and the day I wrote a C program as a tribute to him, one that ended up making both me and my PC suffer!

Well, I was at college most of the day with no internet connectivity, and it remained that way till I returned home late at night. Like always, I checked my email and RSS. Q8Geeks.org had a new post: Dennis Ritchie RIP.

I was not expecting his death, … it was a shock; I was actually sad. Being tired, I thought I should sleep. However, right before I got off my chair, I decided to write a C program as a tribute to Dennis; I was inspired by GH0S1_R33P0R’s code in the blog-post.

The basic idea was an infinite loop that prints “A legend has passed away!”. However, it soon developed into a program that creates files containing “R.D. is a legendary man who’ll be thanked for the next googol years.” for as long as it’s running. Files were named “DR_” followed by the iteration counter in hex, with the extension “.RIP”. I didn’t need to worry about a messy directory since I was in /tmp. Once I ran it, CPU usage jumped to 100% and the system load was climbing by the second. I thought Dennis deserved more than whatever this was doing, so I decided to leave it running till I woke up.

Then I noticed that I had used an int counter for the iterations and file names, which means that once it reached its maximum value (2,147,483,647), it would change sign and eventually start rewriting the files it had already written. So I changed it to long long int, whose maximum on my system is 9,223,372,036,854,775,807. However, we all know this didn’t fix the problem, it only postponed the effect. I was tired and longing for a nice comfy sleep, so I decided to go with postponing. Just to stretch it a bit further, I changed it to unsigned long long int, which extends the range up to 18,446,744,073,709,551,615. I finally compiled it, executed it, and went to bed.
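
(Side note: these limits are easy to check on any given machine. A minimal C99 sketch like the one below prints them straight from <limits.h>; the exact figures are whatever your platform defines.)

#include <stdio.h>
#include <limits.h>

int main(void)
	{
	/* the three counter types in question, and where each one tops out */
	printf("int:                %d\n", INT_MAX);
	printf("long long int:      %lld\n", LLONG_MAX);
	printf("unsigned long long: %llu\n", ULLONG_MAX);
	return 0;
	}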

Seeing that I had slept very late, I woke up around 10am. I wanted to get some homework done on my PC. The first thing I saw when I got to my desk was the command I had issued yesterday, so I tried to stop the program with <ctrl>+C, but it didn’t work. I went for the kill: killall -9 a.out. It was put out of its misery.

Wait, was it? I noticed that the whole system was slow even after killing the process. I thought it was just the aftermath, so I checked my email and RSS. When I was done, I launched the Nautilus file manager to start working on my homework. Well, this is when the nightmare started; the system became as slow as a Pentium II running KDE! I had no idea why that happened, but I was sure it was related to what I’d done. So I decided to clean up the mess I had made in /tmp.

I moved to a tty, turned off my desktop environment, then navigated to /tmp. It was time to see what had happened while I was asleep. ls. Seconds later, I was pressing <ctrl>+C like a maniac; I knew what this meant. The listing was too large to handle: it was going to be built up in RAM and then printed to the screen, which would take a lot of time. One thing I was sure of is that the list was big, but I wasn’t sure how big. Since redirecting output into /dev/null usually makes a command finish faster (if printing is what’s slowing it down), I decided to try ls > /dev/null. I still had to press <ctrl>+C after a couple of seconds. I was certain then that it was a huge list!

At that moment, I no longer wanted to clear the mess; I wanted to know how many files my program had written! It was time to think of the fastest way to get the number of files in a directory. The first thing that came to my mind was checking the file properties in Nautilus, but that was not an option since Nautilus was no longer usable! I thought of running ls -1 | cat -n | tail, but that would be the slowest option ever. Another thought was to get the size of /tmp in bytes and divide it by the size of a single generated file, which should give an approximate number of files. Although that idea was theoretically applicable, it failed in practice: du had the same problem ls had; it was taking so long that I had to terminate it. I had only one last option, a Python script.

import os

# count the entries in /tmp without printing them
dirList = os.listdir("/tmp")
print len(dirList)

Unbelievably, in about 2 to 3 seconds it gave me the output!

4,469,292

Quite interesting, I thought! Actually, this is when I decided to write a blog-post about it.
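
In hindsight, the same count could have been done without building the whole list in memory at all, just by reading the directory entry by entry. A rough C sketch of that idea (not something I actually ran that morning):

#include <stdio.h>
#include <dirent.h>

int main(void)
	{
	DIR *dir = opendir("/tmp");
	struct dirent *entry;
	unsigned long count = 0;

	if (dir == NULL)
		return 1;
	/* count entries as they are read; note that "." and ".." are included */
	while ((entry = readdir(dir)) != NULL)
		count++;
	closedir(dir);
	printf("%lu\n", count);
	return 0;
	}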

Now that my curiosity had been satisfied, I had to get rid of the mess! Problem is, what is the fastest way to do it? I needed this stuff gone, since I needed to do my homework! I tried rm /tmp/*, I tried rm -r /tmp, and I tried my own Python script. (In hindsight, rm /tmp/* was doomed anyway: the shell has to expand the glob into millions of arguments before rm even starts.) One thing that made me feel better is that I was sure my script was working, since it obtains the list first and then deletes the files one by one. I have no clue what rm, ls, and du were doing, since they gave me nothing but a long wait. Thing is, my script was not removing files as quickly as I needed; in about 10 minutes it deleted 1,000 files, or maybe 10,000, but in either case, that is slow when you have MILLIONS OF FILES!

It then simply hit me: Linux discards /tmp once it’s powered off! # reboot. One moment later, my mind was screaming “NOOOO!! My sourcecode!”. I had written the C program in /tmp, so having /tmp deleted also meant having my sourcecode deleted. The system rebooted and I was not able to do a thing. My Pentium 4 was faster than me.

Debian was starting up. Then I saw something that gave me a piece of information, hope, and a headache. It was deleting /tmp!

The piece of information it gave me is that /tmp is deleted on start-up. The hope it gave is that my sourcecode had not been discarded when the system first shut down. The headache it gave was a result of anxiety and over-thinking; the line stated that /tmp was being deleted right then. Although I wanted it deleted, and writing a similar sourcecode is a piece of cake, I wanted the original file. I thought of interrupting the process and turning the PC off to save my file, but the file might already have been deleted, since I wasn’t sure in what order it deletes files: alphabetical, by size, by location on the HDD, etc.! But I had to think fast because IT WAS DELETING ALL THE FILES! I turned off the PC.

It hurts, shutting down a booting system forcefully; it felt as if something inside of me died. Painful.

Anyways, I got my ExternalNut (my 320 GB external HDD), hooked it up to my PC, and booted CrunchBang. I was in, in no time. I mounted the internal HDD, but then paused for a moment. Going into /tmp would cause the same problem and would probably freeze the system, but I knew how to deal with it: one of the many nice things about the command line is that you can copy a file without listing, or even entering, its directory.

cp /media/disk/tmp/dr.c ~/

No errors were printed; I checked my home directory and IT WAS THERE! Phew, the sourcecode was saved.

#include <stdio.h>

int main(void)
	{
	FILE *RIP;
	char fileName[32];	/* room for "DR_" + 16 hex digits + ".RIP" + '\0' */
	unsigned long long int fileCntr=0;

	while (1)
		{
		/* name each file after the iteration counter, in hex */
		sprintf(fileName, "DR_%llX.RIP", fileCntr);
		if ((RIP=fopen(fileName, "w")) == NULL)
			break;
		fprintf(RIP, "R.D. is a legendary man who'll be thanked for the next googol years. ");
		fclose(RIP);
		fileCntr++;
		}
	return 0;
	}

I restarted my PC, booted into Debian, and let it do its deletion. It took time; 10 minutes passed and nothing changed on the screen. Since it had been creating those files all night, I figured it might take a while to delete them, so I left my room.

One hour later, I came back, but nothing had changed! Out of frustration, I rebooted it. Contrary to what I expected, it did not show the deletion message; instead, it booted straight into Debian. Mission accomplished? That’s what I thought until I checked /tmp.

It was still populated. I had no option but to delete the files myself. That’s when Sigtermer came into the story. I leaked only the most basic details of the funny problem I was facing; I wanted his thoughts on it, but I also didn’t want to spoil this blog-post for him. He asked one question that made me remember something which explained a lot: “Did the files exceed the limit?” That question reminded me that there’s a practical limit to how many entries a directory can hold in Linux before things fall apart. It’s not so much a hard limit as a point beyond which utilities that try to handle the whole listing at once start to fail. 4 million had definitely exceeded it!

So now we knew that utilities that deal with a directory as a whole would fail, and we needed to avoid that. It actually made sense of everything that had happened: ls never finishing, even with its output redirected to /dev/null, and the removal of /tmp at boot time taking forever. Sig and I started brainstorming. Sig had a very nice solution: write a script that moves a number of files (below the limit) into a new directory, then deletes that directory; since its contents wouldn’t exceed the limit, it should have no problem doing so. However, I came up with another solution that requires less effort: using find. Although it’s probably slower, I decided to go with it.

cd /tmp
find . -type f -print0 | xargs rm

I killed that process quickly since it didn’t seem to be doing anything; I couldn’t afford extra slowness! (In hindsight, -print0 output needs xargs -0 on the receiving end anyway.) Instead, I went for the -exec option, which runs the given command for every file as it is found, though I hate its syntax.

find . -type f -print0 -exec 'rm {}' \;

And it was showing removal results! I was W00Ting, but then realized that the output was actually errors saying that the found file cannot be removed because it doesn’t exist! WTF?! (In hindsight, it was probably find itself complaining: with the quotes, 'rm {}' gets treated as a single command name, so there was never any rm to run.)

Seeing that the first command did not err, I decided to reuse it after a bit of alteration.

find /tmp -type f | xargs rm -v

IT WORKED! It was actually deleting files!
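
For the record, a purpose-built cleaner in the same spirit, reading the directory entry by entry and unlinking as it goes instead of asking for the whole listing, would look something like the sketch below. Just an illustration; neither of us actually wrote it.

#include <stdio.h>
#include <string.h>
#include <dirent.h>
#include <unistd.h>

int main(void)
	{
	DIR *dir;
	struct dirent *entry;

	if (chdir("/tmp") != 0 || (dir = opendir(".")) == NULL)
		return 1;
	/* unlink each tribute file as soon as its entry is read; no full listing needed */
	while ((entry = readdir(dir)) != NULL)
		{
		if (strncmp(entry->d_name, "DR_", 3) == 0)
			unlink(entry->d_name);
		}
	closedir(dir);
	return 0;
	}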


That was when I started writing this blog-post, and I have had it running till now, from 1:47pm till 8:26pm. And at this moment, I am curious to know how much it has deleted thus far.

Okay, I just killed the find process. I ran my counting script again, and the number of remaining files is … 2,255,170!! What? In 6 hours it only deleted half of them! Come on!!

Sigh, … I guess that’s what happens when you leave your PC generating files till you wake up. Anyways, I’ll leave it cleaning up my mess all night long.

So, … why did all of this happen? Because I thought it would be okay, nice, and cool to populate my HDD with such files. Well, I’m just grateful that my processor is not a Core i7 and my HDD’s bus isn’t SATA.


~ by AnxiousNut on October 14, 2011.

2 Responses to “My tribute to Dennis Ritchie went wrong!”

  1. that went so wrong , but it was fun reading it XD

    and I think you meant “sense” in “It actually made since for all the things that happened”

    • that went so wrong , but it was fun reading it XD

      Glad you liked the stupidity! … But you’d probably like it more if it happened to you! You can still do it! :P

      and I think you meant “sense” in “It actually made since for all the things that happened”

      Yes, this and the other mistake you pointed out on twitter were correct, but not the unsigned long long int one! :P … I guess that’s what happens when I don’t proofread a post (I wanted to spend as little time on it as possible so I could work on my homework, which I haven’t started doing yet).
