Tuesday, March 28, 2006

ExecuteXmlReader woes

These days I get problems which by no means are "funny" (I brand a problem as funny when it's quite challenging and still fun to work on). So the problem is this: given a one-to-many relationship between two tables in SQL Server, generate a one-time report in Excel which linearizes (I don't know whether that's the correct term) the entire data into one row per main entity. Confused? So was I when I got this requirement, so let me try to explain the problem (often, knowing the problem itself means you're halfway there!) with an example. Consider two tables, Order and OrderLines: Order contains the data about the order (say, the customer name, date entered etc.) and OrderLines contains data about the line items in a single order (say, information about the products that were ordered). Here's a snippet of a highly simplified (it works for the example I'm about to give) design of these two tables in the db:
Order:
OrderId int pk,
CustomerName varchar(100)

OrderLines:
OrderId int not null fk_refers_to_Order,
ProductName varchar(100)
Here's a sample set of data:
Order table:

OrderId | CustomerName
--------|-------------
      1 | Tada
      2 | Wada

OrderLines table:

OrderId | ProductName
--------|------------
      1 | Prod1
      1 | Prod2
      2 | Prod1
      2 | Prod3

The generated report should have output in this format:

OrderId | Product 1 | Product 2
--------|-----------|----------
      1 | Prod1     | Prod2
      2 | Prod1     | Prod3

So how do you solve it? Well, here's what I did... I loaded the data using FOR XML EXPLICIT so that the OrderLines data is rolled up and added to the Order node as children, wrote an XSL to iterate through the nodes and print them in CSV format, and loaded the CSV into XLS. It's a different story that I faced quite a few issues with this approach! Firstly, FOR XML EXPLICIT returns the resultset as a fragment (i.e. there is no root node); during my ASP/VB days I could add a dummy root node quite easily in the XML template, but I couldn't find any easy way of achieving it in ADO.Net! The only approach that I found was on serverside.net, which added the dummy root node in the FOR XML query itself. Once I got the XmlTextReader, I loaded it into a DataSet with XmlReadMode.InferSchema and used DataSet.WriteXml to write the entire XML to a file (I had to follow this approach just to get the entire XML out of the FOR XML query; Query Analyzer always truncates the results if they exceed a certain number of characters). With the XML file in hand, I wrote some quick 'n' dirty XSL in Cooktop (this operation was one-time only, hence there was no need to automate the entire process) which would generate the CSV output.
The relevant snippets of code and xsl are below:

// Execute the FOR XML query and get the results back as an XmlTextReader
XmlTextReader r = (XmlTextReader)cmd.ExecuteXmlReader();

// Load the XML into a DataSet, inferring the schema on the fly, then dump
// the complete XML to a file (Query Analyzer would have truncated it)
myDataSet1.ReadXml(r, XmlReadMode.InferSchema);
myDataSet1.WriteXml("data1.xml");
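
For reference, here's roughly what the rooted query looked like; this is a reconstruction from memory rather than the exact query (table names are from the example above, and conn is assumed to be an already-open SqlConnection). The two literal SELECTs are the serverside.net trick: they emit the opening and closing root tags around the FOR XML fragment so that ExecuteXmlReader sees one well-formed document:

// Tag/Parent drive the nesting in FOR XML EXPLICIT: Tag 1 rows become
// <Order> elements, Tag 2 rows become their <Line> children; the ORDER BY
// keeps each parent immediately followed by its own children.
SqlCommand cmd = new SqlCommand(@"
    SELECT '<Orders>'
    SELECT 1    AS Tag,
           NULL AS Parent,
           o.OrderId      AS [Order!1!OrderId],
           o.CustomerName AS [Order!1!CustomerName],
           NULL           AS [Line!2!ProductName]
    FROM   [Order] o
    UNION ALL
    SELECT 2, 1, ol.OrderId, NULL, ol.ProductName
    FROM   OrderLines ol
    ORDER BY [Order!1!OrderId], Tag
    FOR XML EXPLICIT
    SELECT '</Orders>'", conn);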

Update: Had to remove the XSL listing as it was randomly blowing up my feed! I'll put it back up once I figure out what's wrong.

Friday, March 24, 2006

Google maps prototype live again

Some time back my free hosted website was auto-deleted by somee.com due to "cpu overuse"; given that I don't have many high-end background processes running on my site, it was quite a surprise, but as they say, you get what you pay for! Nevertheless, I've got the site back up and running; re-uploading the entire chunk took me some time to figure out, as I kept getting really bizarre configuration errors on the live site.
The Fx page-transition example is also back where it originally was, and so is the Google Maps prototype. By the way, I'd updated the prototype a long time back to integrate it with MapPoint, Flickr and Fotolia... and yes, please don't use way too much bandwidth while checking them out :)

Wednesday, March 22, 2006

My tryst with Linux


It's been nearly 6 months since I installed Linux on my laptop, and since of late I've started booting into it more regularly, I think it's time to post my experiences with Linux (primarily Ubuntu; that's what I've installed).
  • Prep: I used PartitionMagic to create the partitions, as I've had bad experiences with Linux-based partitioning apps; even though that was back in 1999 when I had just started with Linux, it's always better to be careful: as they say, once bitten twice shy. I resized my NTFS partition & created an ext3 partition, and that's where I sc***ed up! I didn't create any FAT partition, making it impossible to have a local read/write share between the two OSes (XP and Linux). Moral of the story: measure twice, cut err... partition once!
  • Installation: Went like a charm; almost everything was auto-detected, including my WiFi card. As a Windows user you might wonder what's the big deal, but believe me, getting every piece of hardware detected and auto-configured is really a big deal in Linux (ask my friends who are still having problems getting their wireless cards detected by Fedora and Red Hat). Another thing I did was to install GRUB (a boot loader) onto the MBR.
  • First Impression: Ubuntu is GNOME-based rather than KDE-based like its sibling Kubuntu, hence I didn't get that Windows kinda look, but I have to admit that apart from the gnome-panels, which look very, very dated, I like the look-n-feel of GNOME.
  • Applications: Ubuntu comes pre-installed with most of the stuff that you would need on a day-to-day basis: Firefox as the web browser, OpenOffice as the office suite, Gaim (IM client), xpdf for PDF viewing, GIMP for image manipulation etc., so you are almost set once you log in.
  • eXtras: The main reason to install a Linux distro was to try out Mono, so I installed Mono and MonoDevelop (an IDE for Mono) and created a small ASP.Net web page in MonoDevelop. Mono comes with a lightweight web server (much like Cassini) called xsp, and voila, the web page ran without any hitches (it was a minimal page along the lines of the sketch after this list). The only thing is that MonoDevelop is still under development (well, so is Mono), so you don't get support for code-behind pages yet.



[Screenshot: HelloWorld in MonoDevelop]

[Screenshot: xsp serving helloworld.aspx]


  • Verdict: All in all a cool "geeky" OS but still not user-friendly enough for Average Joe to really give a scare to Windows.
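
For the curious, the page I ran under xsp was nothing fancier than the sketch below (reconstructed from memory, so the file name helloworld.aspx and the label text are just illustrative; the C# sits inline in the page since code-behind wasn't supported yet):

<%@ Page Language="C#" %>
<script runat="server">
    // Runs when the page loads and sets the label text; with no
    // code-behind support in MonoDevelop yet, everything lives inline
    void Page_Load(object sender, EventArgs e)
    {
        lblGreeting.Text = "Hello World from Mono!";
    }
</script>
<html>
  <body>
    <form runat="server">
      <asp:Label id="lblGreeting" runat="server" />
    </form>
  </body>
</html>

If memory serves, xsp serves the current directory on port 8080 by default, so browsing to http://localhost:8080/helloworld.aspx is all it takes.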

Saturday, March 18, 2006

Url Checker with HttpWebRequest

Recently one of my colleagues came to me with a problem: he wanted to check the validity of a URL submitted by the user for some reporting tool he was writing. Since I've had some experience with HttpWebRequest, I asked him to use it to get the response and then check the StatusCode returned. Everything was hunky-dory till people started entering URLs for huge PDFs and DOC files et al.; since the UrlChecker would make an HTTP GET call to the URL, it meant downloading the entire document from the URL, which was putting some unnecessary load on our server. Well, if you just need to check the validity of a URL (i.e. whether it's alive or not), you can use the HTTP HEAD method to return only the header information (one caveat: a few servers don't implement HEAD and respond with 405 Method Not Allowed, so it's not foolproof). This is how I did it:


private bool IsValid(string url)
{
    HttpWebRequest wr = (HttpWebRequest)WebRequest.Create(url);

    // HEAD fetches only the response headers, so even a huge pdf/doc
    // link doesn't cost us a full download
    wr.Method = "HEAD";

    HttpWebResponse ws = null;
    try
    {
        ws = (HttpWebResponse)wr.GetResponse();
        switch (ws.StatusCode)
        {
            case HttpStatusCode.OK:
            case HttpStatusCode.Accepted:
                return true;
        }
    }
    catch (WebException e)
    {
        // GetResponse throws a WebException for 4xx/5xx responses; the
        // response itself (if the server answered at all) is on e.Response
        HttpWebResponse er = e.Response as HttpWebResponse;
        if (er != null)
        {
            HttpStatusCode status = er.StatusCode;
            er.Close();
            switch (status)
            {
                case HttpStatusCode.Forbidden:
                case HttpStatusCode.NotFound:
                    return false;
            }
        }
    }
    finally
    {
        if (ws != null)
            ws.Close();
    }

    // anything else: treat the url as invalid
    return false;
}
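
And a quick, purely hypothetical usage sketch (the URL below is just an example); since HEAD pulls down only the headers, checking even a link to a huge PDF costs next to nothing:

// Hypothetical caller: validate a user-submitted url before accepting it
string url = "http://www.example.com/report.pdf";
if (IsValid(url))
    Console.WriteLine("url looks alive");
else
    Console.WriteLine("url is dead or unreachable");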

Friday, March 10, 2006

Cooking is not for men

Today while chopping onions for dinner I chopped my thumb along with them. This is the second time this week that I have tested the sharpness of a knife on my fingers, and hence I have come to the conclusion that cooking is indeed not for men, at least not for me. But given that I don't have anyone else to cook for me, I guess I'll have to keep trying the knife on my fingers for some time to come.

Wednesday, March 08, 2006

Unstrung words

There are things in life that happen by themselves, and then there are things that make us strive hard, bend, and at times look like fools. In my experience, the things that don't matter to us are the things that happen by themselves, while the things we care about make us go out and achieve them. Why is that so? It's because of the fears, the doubts and the negative feelings that come along with the things we care for; because they matter to us, we desperately want them to happen: the more you care, the more you fall. And then there are people who appear from nowhere when you're down: good friends, people who are always there to give you free advice, people whom you care about and who care about you... so do you follow your own instincts even if all and sundry think otherwise, or do you listen to them and do what they say? Well, it's your life and not theirs... you have to know what you want/need and think/act accordingly... I don't need somebody else to screw up my life (by following their advice), no thanks... I can pretty much do it myself and be better at it too!
And yes, today is "Women's Day" (not that I care), so wishes to all the women born today, and more so to the woman who, for me, defines the real meaning of being a woman... happy birthday!

Saturday, March 04, 2006

Google, Yahoo and MSN Search

If you're using Firefox and have the HTML Validator extension installed, have you noticed that the MSN Search results page renders without any errors or warnings, whereas Yahoo!'s is rendered with warnings (there were like 126 for the query that I tried) and Google's has errors as well as warnings? Not that it matters much, since the pages render without any glitches, but I feel the MSN Search team deserves a pat on the back for generating an error- and warning-free results page (which, as a techie myself, I know is not the easiest thing to achieve).

bad times are good times...

"bad times are good times to prepare for the better times!"
One of my IM contacts had this one-liner as her status message, and I absolutely fell in love with it... well, of late, I've been having way too much prep time for my comfort!

Wednesday, March 01, 2006

Web 2.0 Paradigm

Almost everyone is jumping on the web 2.0 (a.k.a. "The Return of the .coms") bandwagon; basically, web 2.0 is all about social networking and thick web clients using some Ajaxian technique or the other. Given the previous track record of the .coms (circa 2000, when it all went bust), one really has to think about the business model behind all this. I was recently reading Russell Beattie's pretty insightful post around the same (aptly titled WTF 2.0), where he raises the same questions I've been having for long... what's the business model of all these web 2.0 .coms which have mushroomed out of nowhere all of a sudden?

First things first, let's see how web 2.0 differs from the web 1.0 sites (portals was the buzzword then). I see three major differences:
  1. Internet penetration: the majority of people now have faster internet connections like cable or DSL (think about the dial-ups back then), so an Average Joe spends more of his time online.
  2. Social networking: almost all the portals had very little or close to no end-user involvement; all the end-user could do was read through a myriad of information, hence end-user retention used to be pretty low and it wasn't attractive enough for a new user to create an account. Think web 2.0, be it del.icio.us or digg: it's the end-users that control the content. I like a news story and want others to read it? I digg it. Liked a website? Bookmark it at del.icio.us. Now it's all about sharing your (i.e. the end-user's) knowledge, interests etc. with the world; the current web apps just aid in orchestrating the information.
  3. Mindset shift: another differentiating factor is that the Average Joe is more comfortable now in managing a good chunk of his life and time online; for example, earlier a lot of people used to be sceptical about buying anything online, but that number has come down drastically (think iTunes etc.).


Now to the main point: yes, web 2.0 sites have a better user-retention model than the web 1.0 portals, but where's the cash inflow? Given that domains and shared hosting are dirt cheap nowadays, almost anyone, from a college-going kid to a homely mother of 5, can start a web 2.0 site out of his/her own pocket. No problems here; the problem comes when you get popular, as Russell points out in his post. You have more requests & users than your shared host can handle, so now either you shut shop or you invest a chunk of your money into getting better infrastructure for the site. So now we have a pretty high TCO (good cash outflow without any inflow). Let's try to think of ways of generating revenue; I can only think of 3 ways you can generate revenue online and be profitable (not taking into a/c conning a VC into investing), listed below in order of profitability:
  1. An ad-free and really free website: see point 4. Examples: 43things, Technorati, del.icio.us etc.
  2. Contextual ads: sign up for Google Ads or Yahoo! Ads or whatever, and hope the Average Joe is going to be enticed enough to click on the ads enough times for you to make some good moolah (this might work if you're still on shared hosting and the investment you've made is pretty low). Example: Digg uses this model; by the way, this can't be their only revenue model, given that Alexa rates it among the top 500 websites, they've got to have more than 1000....(fill in the remaining 0s) users, which can't be supported on shared hosting.
  3. Sell something: it needn't be a physical product like Amazon or eBay sells; it could be some advanced features or goodies of your website. Example(s): flickr.
  4. Wait for a bigger fish to swallow you: given that bigger fishes like Yahoo! are buying almost everything under the .com sun, build something niche (actually, it needn't be that niche either), get a good user base to boast of, and start approaching the bigger fishes whom you think might be interested in your thingie. There are a few caveats with this though: first, define the exit criteria pretty clearly. What if no fish takes the bait? Then either be prepared to wait and shell out more from your pocket, or shut shop quietly (online users generally have a short memory span; they'd forget you and move on). Generally, a bigger fish would be more interested in buying "a company" which can exist on its own than in buying just the idea (i.e. your website & perhaps you), so if point 4 is indeed your exit strategy then you might indeed have to look at conning VCs ;-). Think of the bigger fishes which might be interested in your kind of app upfront, and define the technology platform based on that; for example, it would be tough to sell a Java/Apache/PHP/Linux based web 2.0 app to Microsoft, and equally tough to sell an ASP/.Net/SQL Server based website to Yahoo! or Google. Examples: flickr, webjay, upcoming.org, orkut etc.

The straw that broke the camel's back...

Ever since 2002 or so I've been having some pain or the other in my back. Earlier it used to affect my lower back, but over the past year it has happily shifted to my upper back (I don't know how that works, but the lower back is pretty much alright these days while the pain in the upper back has increased many-fold, to a point where sometimes it's just way too much to handle). I think the pain can be attributed to wrong (and long) sitting postures, especially now that I work primarily on a laptop, which means your head is not in line with the screen and you have to bend down to view it properly... that's why I feel it's imperative to use an external monitor and/or keyboard with a laptop, to make sure your head and the screen are pretty much in the same straight line, avoiding the bending-down business altogether. Recently a friend of mine underwent keyhole surgery on his back, for perhaps the same issue, which has got me thinking seriously about visiting a doctor too. Given how much I "love" visiting doctors and taking pills, it's quite obvious why I have been avoiding it for so long; I am hoping that just like the pain shifted from the lower to the upper portion, one fine day it will just shift further up to a point where it doesn't affect my body any longer.