Sunday, October 18, 2009

Codes Codes !!

So much for the title. Well, every time I code a certain algorithm I lose it in the mess of my laptop's hard drive, so I'm finally organizing all my work and linking the codes in the blog :)
Here are a few of them. Needless to say, no matter how compactly I code there is always scope for optimizing it, I agree.

1. Merge sort, a sorting algorithm with time complexity O(N log N)
2. Binary search, using recursion
3. Binary search, using an iterative method
4. Grade-school multiplication (the problem can also be solved using divide and conquer; this is the basic idea of it)
5. Binary search tree (well, if you check out the output at the bottom of the page you can find a seg fault, I wonder why; the code runs perfectly fine on my machine)
6. Red-black tree, this one took a lot of time for me to code! It has the implementation of the insert operation on red-black trees; doesn't include delete()
7. Longest common subsequence, an exhaustive version using backtracking (non-DP code)
8. A whole bunch of linked list operations
9. Depth-first search, the raw code (source: Introduction to Algorithms), useful in determining the number of paths between a given pair of vertices and detecting cycles in a graph (topological sorting)
I'll update the list after coding more.. :)

Friday, October 9, 2009

Some things that I should have learnt way before...

There was nothing much to keep me occupied in the past month to post about. Creating a Makefile was one of the things that came up after idling for a long time.
So here is the way you do it; the complete tutorial about it can be found at
http://www.gnu.org/software/make/manual/make.html

Here are a few simple sample files for make:

/**
*main.c
*/

#include <stdio.h>

int two(void); /* defined in two.c */

int main(void)
{
    printf("main\n");
    two();
    return 0;
}

/**
*two.c
*/

#include <stdio.h>
#include "header.h"

int two(void)
{
    printf("two %d\n", N);
    return 0;
}


/**
*header.h
*/

#ifndef HEADER_H
#define HEADER_H

#define N 100

#endif


#Makefile
# (note: each recipe line below must be indented with a literal tab, not spaces)

objects = main.o \
          two.o

edit : $(objects)
	gcc -o edit $(objects)

two.o : header.h

.PHONY : clean
clean :
	rm -f edit $(objects)

#end Makefile

However, a better insight can be obtained by glancing through the URL ;)
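To try the sample end-to-end, here is a self-contained sketch. It assumes gcc and GNU make are installed, and uses /tmp/make-demo as a scratch directory (an arbitrary choice); the Makefile is written with printf so the recipe lines get the literal tabs that make requires:

```shell
# Set up a scratch directory for the demo.
mkdir -p /tmp/make-demo && cd /tmp/make-demo

cat > header.h <<'EOF'
#ifndef HEADER_H
#define HEADER_H
#define N 100
#endif
EOF

cat > main.c <<'EOF'
#include <stdio.h>
int two(void); /* defined in two.c */
int main(void)
{
    printf("main\n");
    two();
    return 0;
}
EOF

cat > two.c <<'EOF'
#include <stdio.h>
#include "header.h"
int two(void)
{
    printf("two %d\n", N);
    return 0;
}
EOF

# Recipe lines must start with a literal tab, hence printf '\t...'.
printf 'objects = main.o two.o\n'      >  Makefile
printf 'edit : $(objects)\n'           >> Makefile
printf '\tgcc -o edit $(objects)\n'    >> Makefile
printf 'two.o : header.h\n'            >> Makefile
printf '.PHONY : clean\n'              >> Makefile
printf 'clean :\n'                     >> Makefile
printf '\trm -f edit $(objects)\n'     >> Makefile

make        # compiles main.o and two.o via implicit rules, then links edit
./edit      # prints "main" then "two 100"
make clean  # removes the objects and the binary
```

Note that main.o and two.o are built by make's implicit .c to .o rule; the Makefile only has to state the extra dependency of two.o on header.h.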

Friday, August 21, 2009

Semester Project on Gesture Recognition

There is not much to post about this on this blog because it is altogether a group project and has to be maintained by all the group members. Here is the link to it:

http://gesture-recognition.blogspot.com

Tuesday, July 28, 2009

Back to college..New Campus

And so the summer passed away. I still have things left to be done; the ExIF project was merely touched, with lame reasons as excuses, but never mind. Hoping to hop back onto its back and start riding at jet speed ^_^. As of now I am happy as a child because I am browsing the fastest Net I have ever experienced. New semester, new courses, new campus; let's see what new things I learn this sem.. sounds egotistic, but never mind.

So talking about ExIF, I will carry on that work with Mozilla. I was so damn excited when I started my work on it, and I will be continuing the same. This semester I have loads of work on hand. The sem project, regarding effective human-computer interaction with the help of gestures, I will post about after getting started.

Thursday, June 4, 2009

Kernel/Ubuntu------nVidia

Well, seriously, I thought I was doomed and was about to install a fresh copy of Jaunty on my laptop. I was proud that I owned a laptop with an nVidia graphics card, only to realize that the Linux distros, as far as I know Suse (the worst ever experience) and Ubuntu (better than the bitter), did not really get along well with the nVidia third-party drivers.

Jotting down how I dealt with the situation in the two distros. With Suse, I had to download the entire kernel source, which was around 200 MB and was hell to get with the bandwidth at college; nonetheless I managed to obtain that, along with the Linux nVidia drivers from the nVidia site, and I had to recompile the entire kernel source in order to get things working on my laptop. It was a completely different story with Ubuntu, where apt and synaptic swiftly managed things for me, and installing the restricted drivers was not really a problem; all I had to do was a few clicks.

But lately I updated my packages, only to lose some time in vain and suffer with the nVidia driver thing again. My kernel got upgraded from 2.6.28.8 to 2.6.28.11; previously I had the source of the .8 kernel, but I don't know what happened after the upgrade, I lost the source for the .8 and obtained the source for the .11. It was then that an idea struck me to update my kernel, and everything went smooth.. ha ha, I must not let my pride take over me :P kidding.

Well, in the course of updating I also downloaded the Kubuntu desktop environment; its glossy looks, as always, win over any user.. so let me try it and switch back if not satisfied :)

Saturday, May 30, 2009

It's Summer...09

Done with the finals of the semester. For the past few months I have been desperately trying to get into the open source community, and I have explored a lot in due course; of course the path wasn't all that much fun. Lots of good people out there willing to help others; critics, as usual, are normal ups and downs. Finally it settled down with me writing my first tiny patch for a bug in Firefox. /me gets a sense of accomplishment !!

Looking to work on something this summer; a project that can earn me experience would be satisfactory to me. My quest for the same had begun long back :)

Saturday, April 4, 2009

Most of the useful information is free.

Firstly, let me clarify that this post is not for a high-end geek. Considering the need for data, there are many ways to get it, and one way to achieve those ends is the same old Google. You might be booing that you have known this since kindergarten, but what you might not have known so far is how to use the site effectively.

Starting with the way Google works, as far as I know: when you type some text into the search bar and hit the return key, you are shown almost precisely the result you are looking for, and Google does this by ranking web pages. A page's rank depends on how many other pages link to it; whenever another page includes a link to it, the rank of the linked page increases. And you might be wondering how Google gets to find a web page located somewhere remote. This job is accomplished by a Google bot called a spider (also known as a crawler): it crawls into a site whenever it finds a link to that site, and from the crawled site on to another, and in this way the bot covers web pages across the entire internet, approximately 220 million of them by now.
So consider yourself lucky: whenever you have the Google page in your browser, you have access to all those 220 million pages in a single click (thanks to Google).

That is as far as the history lesson goes; now, coming to practicality and applications, how can data be mined out efficiently? The answer actually depends on what sort of data you are looking for. OK, for the time being let's categorize it broadly into education and entertainment, and let's talk about entertainment first :) So how do you download music? You locate the site, go to it, go through all the fuss, and then comes the download page (which might even ask you to register and all sorts of things). The simplest way: open the Google page and type in the text as shown below

intitle:"index.of"(mp3) linkin.park

What you get is whole-directory access to the songs by Linkin Park. Explaining the format and how it works: the "index.of" part serves as the actual key, which helps in searching and listing directories as they are; now all you have to do is find your song in the list and download it, lolz, easy ain't it? The extension in the parentheses, (mp3), can be replaced with any extension you want to find data about, and if you want to include multiple extensions it goes like (mp3|swf|wav), etc.

That is by far the most efficient method I use; people out there, if you find any others, feel free to share them.
Google search also allows you to use symbols like + and - in order to include or eliminate terms in a search. An example: suppose you want to search pages that are .php and ignore all the pages that are .asp or .html or any other format.
All you have to do is type some simple text, where 'abc' is the content you wish to search for:

abc -html -asp +php

Next comes ;) education (no offense). Now suppose you are interested in all the academic stuff; all you have to do is prefix your search with this:

site:.edu <content you want to search>

What you get is all the sites from educational institutes, and the best thing: search for the assignment your professor gives you; who knows, you might find it. The sites you might want to search may vary; how about trying some like .org (the content on this sort of site is mostly free), or .net.

Next is the file types you want to search for. Let's say you needed a white paper published by so-and-so author; you would definitely find it in IEEE, but why not try some other means to get it without logging in or anything like that? Do this:

filetype:pdf <content you want to search>

And again, pdf can be replaced with doc or ppt (most of the slides your lecturer teaches from are here, give it a try).

And lastly, try applying combinations of the above methods to find even more useful results, like a .pdf or a .ppt from site:.edu:

site:.edu filetype:ppt <content to search>

That would be all for now in this post, and I would really be glad if anyone points out any mistakes or issues in it (if you can) :P.
See you again with some other information.

Introduction

The name of the site says it all. The reason and sole purpose of the site is to increase, share, and spread the knowledge I have accumulated through real-life experiences all my life. Remember, the site mainly includes ideas from the tech world; though I am not a tech guru myself, I will try and contribute as much as I can to the world for a noble cause.

That is all for the formal introduction of what this site is about. I will try and accumulate as much information, useful information that is, here on a regular basis, so viewers are welcome to post comments and advice on improvements or anything else for now.