
Friday, February 07, 2020

Adventures in Code Spelunking


It started innocently enough. I had an Azure DevOps Test Plan that I wanted to associate some automation with. I’d wager that only a handful of people on the planet would be interested in this, and I’m one of them, but the walk-throughs in Microsoft’s online documentation seemed compatible with my setup – so why not? So, with some free time on a Saturday afternoon and some horrible weather outside, I decided to try it out. And after going through all the motions, my first attempt failed spectacularly with no meaningful errors.

I re-read the documentation, verified my setup, and watched it fail a dozen more times. Google and StackOverflow yielded no helpful suggestions. None.

It’s the sort of problem that would drive most developers crazy. We’ve grown accustomed to having all the answers a simple search away. Surely others have already had this problem and solved it. But when the oracle of all human knowledge comes back with a fat goose egg, you start to worry that we’ve all become a group of truly lazy developers who can only find ready-made code snippets on StackOverflow.

When you are faced with this challenge, don’t give up. Don’t throw up your hands and walk away. Surely there’s an answer, and if there isn’t, you can make one. I want to walk you through my process.

Read the logs

If the devil is in the details, surely he’ll be found in the log file. You’ve probably already scanned the logs for obvious errors; it’s okay to go back and look again. If the log file seems like gibberish at first glance, it often is. But sometimes the log contains gems that give clues as to what’s missing: maybe a warning that a default value is missing, maybe a typo in a parameter.

Read the logs, again

Amp up the verbosity of the logs if possible and try again. Developers often rely on verbose logging to diagnose problems in the field, so the hidden detail in a verbose log may reveal further gems.

Now’s a good moment for some developer insight. Are these log messages helpful? Would someone reading the logs from your program be as delighted or frustrated with the quality of these output messages?

Keep an eye out for references to class names or methods that appear in the log or stack traces. These could lead to further clues or give you a starting point for the next stage.
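If you want to automate that first pass over a noisy log, a few lines of Python will surface the interesting lines. This is only a sketch: the keywords and the sample log below are illustrative, not output from any real tool.

```python
# Quick first pass over a log file: pull out lines that hint at trouble.
# The keyword list and sample log are illustrative, not from a real tool.

KEYWORDS = ("error", "warn", "exception", "missing", "default")

def interesting_lines(log_text):
    hits = []
    for number, line in enumerate(log_text.splitlines(), start=1):
        if any(word in line.lower() for word in KEYWORDS):
            hits.append((number, line.strip()))
    return hits

sample = """\
Starting task runner
WARN: parameter 'testPlanId' missing, using default
Task completed with exit code 1
"""
for number, line in interesting_lines(sample):
    print(f"{number}: {line}")
```

A script like this also doubles as a cheap index into a huge verbose log: run it once, then jump to the flagged line numbers in your editor.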

Find the source

Microsoft is a larger contributor to open-source projects on GitHub than anyone else, so it makes sense that they bought it. Just watching the culture shift within Microsoft over the last decade has been astounding, and now it seems that almost all of their properties have their source code freely available for public viewing. Some sleuthing may be required to find the right repository. Sometimes it’s as easy as Googling “<name-of-class> github” or following the link on a NuGet or Maven package page.

But once you’ve found the source, you enter a world of magic. Best case scenario, you immediately find the control logic in the code that relates to your problem. Worst case scenario, you learn more about this component than anyone you know. Maybe you’ll discover they parse inputs as case-sensitive strings, or that some conditional logic requires the presence of a parameter you’re not using.

Within GitHub, your secret weapon is the ability to search within the repository, as you can find the implementation and its usages in a single search. Recent changes to GitHub’s web interface allow you to navigate through the code by clicking on class and method names – support is limited to specific programming languages, but I’ll be in heaven when this capability expands. The point is to find a place to start and keep digging. It’ll seem weird not being able to set a breakpoint and simply run the app, but the ability to mentally trace through the code is invaluable. Practice makes perfect.

If you’re lucky, the output from the log file will help guide you. Go back and read it again.

As another developer insight – this code might be beautiful, or it might make you want to vomit. Exposure to other approaches can validate and grow your opinions on what makes good software. I encourage all developers to read as much code that isn’t theirs as they can.

After spending some time looking at the source, check out the project’s issues list. You might discover your problem is known by a different name, one familiar only to those who wrote the code. Suitable workarounds might also surface from other reported problems.

Roadblocks are just obstacles you haven’t overcome

If you hit a roadblock, it helps to step back and think of other ways of looking at the problem. What alternative approaches could you explore? And above all else, never start from a position where you assume everything on your end is correct. Years ago, when I worked part-time at the local computer repair shop, I learned the hard way that the easiest and most blatantly obvious step – checking to see if it was plugged in – was the most important step not to skip. When you keep an open mind, you will never run out of options.

In my case, the error message I was experiencing had no corresponding source code online; all of my problems were baked into a black box that only exists on the build server when the build runs. When the build runs… on the build server. When the build runs on the build agent… that I can install on my machine. Within minutes of installing a local build agent, I had the mysterious black box gift-wrapped on my machine.

No source code? No problem. JetBrains’ dotPeek is a free utility that allows you to decompile and review any .NET executable.

Just dig until you hit the next obstacle. Step back, reflect. Dig differently. As I sit in a coffee shop looking out at the harsh cold of our Canadian winter, I reflect that we have it so easy compared to the original pioneers who forged their path here. That’s who you are, a pioneer cutting a path that no one has tread before. It isn’t easy, but the payoff is worth it.

Happy coding.

Saturday, January 21, 2017

This time on a mac

There’s been less activity than normal around here. Looking back, I missed last year completely. That’s more than a blip, that’s a complete year. Not sure what happened, I think it was a perfect storm...

Those who know me know that I started practicing karate in 2015, and my Facebook feed has been filled with belt exams and karate videos. I’m highly motivated to obtain my first degree in under three years, so three evening classes a week plus weekends makes it difficult to find a few extra cycles to experiment and blog. On the plus side, I’m probably the most fit I’ve ever been: I’ve dropped nearly twenty pounds, toned up, and am dangerous.

The other major shift for me was professional: work wasn’t selling the projects that suited my skills. I had a crisis of faith of sorts, where I felt I had to choose a new religion. I switched into managing more people, bought a Mac and learned Swift, and explored leveraging my .NET and WPF strengths in Xamarin. Work would eventually pick up later in the year, but in true agency fashion it involved dusting off my Unity3D skills, learning how to rock a HoloLens, and learning Python and Ruby while I was at it. There’s loads to blog about.

I want to touch on the MacBook reference above. I feel like I’ve made a career out of bashing Apple. (I ended many arguments in the 90s with Apple fanboys with my favourite argument killer: “…but it’s a Mac…”) Buying a Mac was a huge slice of humble pie for me. I still use a PC at work, but my Mac is now my playground and go-to device when I want to do something that isn’t “work”. I’m writing this on a Mac, trying out Blogo today.

One last thing: to accompany my MacBook, I switched to an iPhone and parted ways with my Windows Phone. I’m no stranger to iOS, as I’ve had an iPad and iPhone in the past, but I was on Windows Phone for years across three different devices. I loved the platform, and still do, but I got tired of waiting for apps that the rest of the planet already had. Apple Pay and Wallet are awesome. I may still buy a Lumia to play with UWP and Xamarin.

So what should you expect this year around here? Well, hopefully I can make good on promises to blog more, but expect to see a combination of development and agile delivery practices. You’ll likely see some Xamarin, some UI automation, maybe a HoloLens. From an agile delivery perspective, expect some tips and tricks to bring to your team as well as a focus on DevOps and Test Automation.

Monday, February 20, 2012

My first month with Kanban

Just before the Christmas break, I took over an existing project that was approaching a major milestone. It's a great project: a small team, working with a high profile client to build a modular MS Surface application that ships a new release every few months.

You would think that taking over a project in the late stages of development would be a bad thing; however, it’s been a fairly positive experience. Although the project suffered in the beginning as our bandwidth was reduced while I was ramping up, the opportunity to introduce a fresh set of eyes with a different perspective has allowed the team to innovate and challenge their assumptions (I’d like to think it’s been for the better).

One initiative that I felt the project needed was some form of Agile methodology. Although we had a set of high-level stories that we knew had to be implemented, there was no breakdown of the tasks or ability to gauge progress. Despite this obvious need, I knew Agile was going to be a tough sell: the project has a highly dynamic, fast-paced environment where requirements and tasks change daily. Adopting a full-on Agile process seemed too formal. I might be able to establish a backlog, maybe some task estimates -- but trying to establish planned iterations with client involvement? Not a chance. Besides, I’d likely wind up losing a good chunk of my day managing the status of tasks and justifying burn-down charts to management.

This was a great opportunity to try something new. I had heard about Kanban before, but didn’t understand it enough to put it into practice. I found a great (free) eBook that explains Kanban by outlining the differences between Scrum and Kanban. I highly, highly recommend it. Download it now.

The following is an account of how we put Kanban in place…

Kanban in a nutshell

Kanban is a very lean process methodology that is based on a subset of agile/scrum. Whereas Scrum comes with rules about how to work with iterations and stories, Kanban comes with nearly no rules at all. In fact, you often have to add your own rules to Kanban in order to align it to your project/organization.

At a high level, Kanban is all about visualizing the work as a workflow on a "Kanban board". User stories are often depicted using sticky-notes, and they travel through the workflow on the board. Although you could easily do the same thing with Scrum, the key difference is that a scrum board would be reset at the end of the iteration. Kanban doesn't have the concept of iterations, so the board is the project.

Visualizing our Workflow

Before we could dive into realizing our project in a sticky-note form, we collaborated a bit to determine what our development workflow would look like. Although our process utilizes graphic designers from time to time, we decided to limit the scope of our Kanban board to the development pipeline only.

As mentioned previously, our process for this project isn’t very well defined. The majority of our work is based on a SharePoint list that our client has access to, and we log defects and work items there. Each day, we pull things from the list and work on them until we reach a deadline for a release or have completed all tasks to the best of our ability. It’s not the best system, but it’s just enough to work. The team decided that the best place to start with our Kanban workflow was in the middle where we were most involved, so we’d have two columns to represent the development tasks:

  • In progress: Items that the team is actively working on.
  • Done: Items that the team has completed development and are ready for testing.

Of course, our process doesn’t end once development is complete. From here, we package up a release and deploy it to a local Surface device in the office where other members of the team can see our progress. The team decided that we should do this more frequently and with some consistency, so we added another column for this:

  • Surface: Items that have been deployed to a local surface machine for verification. If we identify problems with any of these features, they go back to the beginning of our process.

Eventually, once all the stories and tasks have been completed, we bundle up our changes and deploy it to our client’s surface machine for their testing and feedback. The process repeats until the client accepts the current package and it is released to their Surface machines. We represent these steps with another two columns:

  • Client: Items that have been deployed to the client’s location. It’s rare that stories are rejected by the client, but there’s often a lot of feedback that gets added to the beginning of our pipeline.
  • Live: Items that have shipped.

At this point in the exercise, we’ve captured how things are currently executed on this project. The astute will recognize that the beginning of the workflow is lacking definition – I’ve not outlined how we account for tasks that aren’t actively being worked on, or how effort is prioritized. I’ve deliberately left this out, partly because I feel the pipeline has two distinct parts (an incoming and outgoing flow) but mainly because it’s a good way to explain one of Kanban’s core features: work-in-progress limits.

Keeping the team focused by establishing Limits

Clearly, our workflow will have a column that represents a backlog for our remaining effort. Although I could add it to the beginning of our workflow, which would make our Kanban board look a lot like a standard Scrum burn-down chart, there isn’t anything that would prevent us from falling into our old habits. In a highly dynamic project like this one, there’s a constant barrage of client requests and other noise that is beset with a sense of immediate urgency – most tasks get assigned to a developer and go straight to an “In Progress” state as they arrive. Essentially, if there aren’t any rules about when things are allowed to be considered In Progress, then there aren’t any reasonable expectations for the schedule of that work. This creates a real problem with managing client expectations and keeping track of what tasks people are actually working on. While there’s something to be said about multi-tasking, a team without focus gets nothing done.

To resolve this issue, we take advantage of one of Kanban’s unique features. At the top of our In Progress column, we add a number that represents the maximum number of items allowed in that column. This is known as a work-in-progress (WIP) limit. Since developers only have one head and two hands, we assume each developer should work on only one task at a time. That’s easy math for a team of two developers.

By limiting our In Progress items to 2, we put a spotlight on the items the team is actively working on, but this also creates a visibility problem: which items will be worked on next? To help visualize the priority of items, we add another column:

  • Accepted: These are the work items that the developers will tackle after they complete the In Progress tasks. We refer to them as “Accepted” because the development team has reviewed the tasks and committed to delivering these features.

To prevent the Accepted column from getting bloated, we give it a WIP limit of 4. This ensures that the developers have just enough prioritized items in their queue to keep them busy for the day. We could go higher, but 4 is a very manageable number.
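Our actual board lived entirely on sticky notes, but the WIP rule is simple enough to sketch in a few lines of Python. The column names and limits below mirror our board; everything else is illustrative:

```python
# Minimal sketch of a Kanban board that enforces WIP limits.
# Column names and limits mirror the board described above.

class KanbanBoard:
    def __init__(self):
        self.columns = {"Accepted": [], "In Progress": [], "Done": []}
        self.wip_limits = {"Accepted": 4, "In Progress": 2}

    def move(self, item, source, target):
        # Refuse the move if the target column is already at its limit.
        limit = self.wip_limits.get(target)
        if limit is not None and len(self.columns[target]) >= limit:
            raise ValueError(f"{target} is at its WIP limit of {limit}")
        self.columns[source].remove(item)
        self.columns[target].append(item)

board = KanbanBoard()
board.columns["Accepted"] = ["task-a", "task-b", "task-c"]
board.move("task-a", "Accepted", "In Progress")
board.move("task-b", "Accepted", "In Progress")
# A third move would raise: the In Progress column is capped at 2.
```

The point of the exception is the same as the point of the number written at the top of the physical column: pulling new work requires finishing something first.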

Adding Rules and Events

As mentioned previously, Kanban doesn’t have many rules and you need to add your own to suit your needs. Here are a few rules that we’ve established:

  • Our project manager is not able to manage any of the columns except the Accepted and Backlog columns. This means he plays a key role in helping to direct the work towards client expectations. Indirectly this also means that he’s not able to interfere with work in progress.
  • When the Done column has enough items (we’re still trying to figure out a proper WIP for Done) we halt development and deploy what we have to our local Surface machine. We do this to keep the amount of regression testing to a minimum.
  • Before we deploy to the client environment, we must validate the work items against the local Surface machine. This ensures that we use the same installation media, and we use the items in our Surface column as a list of regression tests. We are actively looking to automate our deployment process so that we can deploy more frequently.

Results so Far

There are some really interesting side-effects from implementing our Kanban board:


  • The most notable impact is that our stand-ups have changed considerably. Rather than spending 15-20 amnesiac minutes trying to remind each other what each of us did 16 hours ago, we spend a few minutes reviewing the current items on the board and prioritizing next steps. We then talk about the issues. It’s a night-and-day difference from previous projects.
  • Having the board highly visible on our wall simplifies communication and makes managing the tasks much easier. We get immediate feedback when a developer finishes a task because they have to get up from their desk and physically move their item to another column. And since we have an Accepted column that contains prioritized ready-to-go items, there’s never any guess work or wasted allocation trying to figure out what developers should do next.
  • The visual representation of the board makes it easy to gauge the health of the pipeline. If the Accepted column is full at the beginning of the day, I’ll likely get a few extra cycles to focus on development tasks. If the Accepted column is low, I can quickly scan the backlog and determine if we’re going to have any issues.
  • There are some interesting metrics that come out of tracking the status of the board daily, something I’ll likely write more about in another post. In short, I count up the number of items in each column at the beginning of the day and add a small journal note that summarizes the changes on the board. From this I can easily track our last deployment and how the backlog changed.
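That daily journaling habit can be sketched in a few lines of Python. The column names come from our board; the counts and the diffing logic are illustrative:

```python
# Illustrative sketch of the daily board journal: count items per column,
# compare against yesterday's counts, and summarize the change.

def summarize(yesterday, today):
    changes = []
    for column in today:
        delta = today[column] - yesterday.get(column, 0)
        if delta:
            changes.append(f"{column} {delta:+d}")
    return ", ".join(changes) or "no change"

yesterday = {"Accepted": 4, "In Progress": 2, "Done": 1, "Surface": 0}
today = {"Accepted": 2, "In Progress": 2, "Done": 3, "Surface": 0}
print(summarize(yesterday, today))
```

A run of these one-line summaries is enough to spot when the Done column last emptied (a deployment) or when the backlog suddenly grew.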

Challenges

There are a few minor challenges we’ve encountered:

  • From a project management perspective, it’s difficult to gauge remaining effort in hours. Although we’ve applied a sizing rank to our tasks (Small, Medium, Large – where Large is 8 hours), it’s difficult to glance at the board and determine if we’ll make a target date. The Kanban eBook suggests that all post-it notes on the board should have the same weight, which would solve this problem, but we are still struggling with how to group and organize our tasks to fit within this concept.
  • The board only works when we’re all in the office. I’ve adjusted the columns in our SharePoint list to align closely to our board, and there is some duplication of effort trying to keep the board in sync. The system that I’ve found to work best is to ensure that all tasks are born within SharePoint, and they’re considered Pending until there’s a post-it note on our Kanban board. Periodically throughout the week, I spend a few minutes updating our SharePoint list to reflect the board’s current status. The SharePoint list acts as a digital memory that can provide additional details and attachments, but the board is the most accurate representation. There are still some discrepancies between the two systems, and we either need some formal process or tweaks to make this work better.

Conclusion

For the last month, the team has been putting the final touches on a release which has involved completing some features and fixing defects. This type of work changes daily and our Kanban board has allowed us to visualize the outstanding work in a meaningful way. We’ve started having scheduled retrospectives to gather feedback and adjust our process. Generally, that feedback has been positive and we’ve course corrected a few times for the better. The true test will come shortly when we start working on the next release – will the Kanban format work or should we adopt formal sprints?

As a final word, I feel it’s important to mention the feedback that I’ve received from other teams. Some glance at our board and are intrigued, others look at the post-it notes as archaic technology and feel that our process could be brought into the 21st century with some software package. While there may be a great software package for this, the eBook warns about this – all developers will have this reaction. Our Kanban board requires no licenses, doesn’t take time to boot, responds instantly to tactile input and is always on. Besides, there is no equivalent software package that allows you to ceremoniously march a stack of post-it notes around the room as they go live.

Happy coding.

Thursday, December 21, 2006

Props to Blogger

Blogspot has finally rolled out their new version. Good job. Better administration, tagging, faster publishing. Only suggestion: spellcheck. I've upgraded my template to take advantage of these new features, and I've even tagged most of my posts. I'll update Cameron's blog soon. I've been posting more to my blog internal to our company, but hopefully this new year I'll find the time to blog more. More pictures too. If you find yourself reading this, leave comments -- they're encouraging and a good prod in the butt to write more. Incidentally, I've got lots of posts I've started but haven't finished. I should get those out, shake off the dust and post 'em.... even if they are several years old.

Sunday, July 16, 2006

Windows 98 + high speed Internet = hilarious

Tip: If you have high-speed Internet and you don't have a router or other physical firewall device, you are running a serious risk. Go buy a router!! Here's a story about how a simple OS installation turned into a nightmare...

My wife and I took a trip out east for my Mother-in-Law's 60th birthday. Anytime she was in our province, she'd always talk about her computer with such disappointment and tell me how I should come out to fix it someday. Well, when I learned that their machine was completely unstable on Windows ME (gawd help me) and their tech-friendly neighbour had downgraded them to Windows 98 (holy crap!) -- I knew I needed to help.

So I looked into buying a copy of XP. Strangely, an upgrade is $230 and an OEM copy is $110. The catch is that you have to buy a motherboard in order to qualify for the OEM version -- no exceptions. Even if I bought a cheapo motherboard, it would still come out cheaper than the upgrade. Hey Microsoft -- fix your pricing, that's stupid.

So instead of purchasing a new version, I decided that my aging computer, which I hadn't hooked up since we moved into our house a year ago, was officially retired -- I could donate that copy to my Mother-in-Law without violating any licensing agreements.

I bought a cheap 80 GB hard drive for $50 (+ tax, grrr) on the day of my flight out and committed to installing the new hard drive and OS while I was there. The installation turned out to be the most complicated OS install I've ever performed.

I learned that there must have been some serious security improvements to XP since the first release of the OS -- I received my copy the week XP launched, so a fresh installation needs a zillion updates. I also learned that either the high-speed Internet provider out east couldn't care less about mitigating hackers, or I just didn't realize how effective my Linksys router is.

I ended up installing the OS three times:

The first install went flawlessly -- I kept the Internet connection unplugged until it was ready to download updates. As soon as I started up Windows Update, the initial fix (the Background Intelligent Transfer Service, or BITS, upgrade) took forever to install. About twenty minutes in, I realized that some hacker had compromised the machine: in place of the BITS upgrade, a Trojan had been installed under the same name, the root of the hard drive was filling up with garbage files and executables, and pop-up messages were launching. This PC was now a honey-pot for hackers. In frustration, I put the installation CD back in and rebooted -- screw this!

I wised up for the second installation and took some additional security steps. I changed permissions on the hard drive and disabled simple sharing. I thought I had turned off "File and Printer Sharing", but an hour later I was screaming politely (in front of my father-in-law) and rebooting with the installation CD.

For the third installation, I needed help. I borrowed a neighbour's D-Link router and magically all the hacker nonsense stopped. Before I left Nova Scotia, I convinced my father-in-law that they absolutely needed a router.

Looking back on it -- they claimed they had serious problems with Windows ME but fewer problems with Windows 98. I'm not sure how to qualify "fewer", though, because while I was backing up data for the re-installation their machine would crash two or three times per hour. In a way it was comical, because each time it crashed my not-so-computer-friendly father-in-law would ask, "What causes that?"

I explained it the best I could: "Running Windows 98 nearly ten years ago, when everyone was on dial-up, was considered 'safe'. Now that everyone's on broadband, things have changed; security has changed. Imagine a bank with security practices from the 1950s that has no bars, silent alarms or security cameras... would you bank there?"

Friday, March 24, 2006

Holy Smoke!

While I was commuting back from Las Vegas, my folks were on a Princess cruise ship freaking out. You hear about this stuff happening, but it always happens to strangers. My folks are fine, but this is really unbelievable... CNN.com - Cruise ship fire survivors count themselves lucky - Mar 24, 2006

Friday, March 03, 2006

Recorded Media Center programs, meet Pocket PC

Dang!

Recently I bought a 2 GB SD card cheap from Future Shop ($99 CAN) so that I could start taking some of my recorded Media Center TV shows onto my Pocket PC. This has proven to be a bit of a difficult task...

First off, it takes a lot of time to find a convenient way to transform Microsoft's dvr-ms files into something less... massive. One hour of recorded TV is roughly 2.5 GB. And everyone has got a piece of software to do the job, for a price. I've test-driven several freeware applications, but you've got to be really frikkin' patient to try some of these things out.

DVR 2 WMV has proven to be the quickest way to convert into Windows Media format, where one hour of video takes about 20 minutes. Initially I had some problems getting the application to work, but after I turned on Compatibility Mode, things seemed to work fine. (I've also got my eye on using the MSDVR Toolkit to automatically strip out commercials and convert into various formats, but it's not as straightforward as DVR 2 WMV.)

The next big challenge is actually my Pocket PC. I have an iPaq 3870, which was released back in 2002 as a Compaq product before the big HP merger. At that time, the 203 MHz processor, TFT screen, 64 MB of internal memory, SD card slot and built-in Bluetooth were HOT. It was crazy expensive compared to some of the units available today. (Yet another pioneering effort on my part. You do realize that me buying this stuff at the early-adopter stage is what makes it affordable for you people? You're welcome.)

Although I've used my PDA fairly frequently (I even wrote some applications with the .NET Compact Framework), it's become more of a second-class peripheral on my cluttered desk. I use it mainly to display my calendar for the day, and since my laptop has Bluetooth, I don't even have to worry about hooking it up in the morning.

Sadly, it's also only running Pocket PC 2002, which has Windows Media Player 8.5 for Pocket PC. Guess what format DVR 2 WMV uses? If you guessed Windows Media 9, you would be absolutely correct. Enjoy a Coke on me.

So, at long last, I finally found a reason to download and install The Core Pocket Media Player (TCPMP), an open-source media player for pretty much any portable device. It can play most media formats, including DivX. But I was shut out again! It looks like the Windows Media support in TCPMP is based on the Windows Media codecs installed on the device. Back to square one: either find a way to acquire the codecs, or find another tool to encode the DVR-MS files.

Although I wasn't able to find the codecs as a separate download, the real killer is that you can't just download Windows Media 9 for Pocket PCs -- you have to upgrade your OS, which is... sub-optimal. It turns out you can't just walk into a store and buy a copy of Pocket PC 2003; it can only be upgraded through the manufacturer of the device. Specifically, the ROM has to be reprogrammed. Do you think HP is going to have a ROM for a four-year-old Compaq device available for purchase on their site? Rhetorical-question answerers may drink another Coke for answering no.

Then, on a whim, I tried eBay. I was pleasantly surprised to find the ROM for only $2.99 US, free shipping. Skip the afternoon $2 coffee and have that sucker shipped to my office, pronto. So, we'll see what an upgrade brings. Maybe I'll find myself with a renewed interest in my Pocket PC. Or, as a late adopter, I'll break down and buy an iPod like the rest of you (thank you for making that stuff affordable).

Friday, February 03, 2006

DVD-Decryptor is dead, now what?

I'm really interested in ripping DVD content into a format for my portable media player, but I came late to the game -- the fan favourite DVD-Decryptor is out of business, simply because their application could break copyright encryption.

But ShrinkTo5 may stand as the next successor to DVD-Decryptor. I've heard rave reviews about this tool, and I need to find some time to play with it.

The big difference: it doesn't break CSS encryption. Well, at least not on its own.

A popular DeCSS decryption dll, "machinist.dll", can decrypt the CSS. If this dll is included in the same folder as ShrinkTo5, it'll load the decryption algorithm and decrypt on the fly.

A quick Google search and I was able to find a site that posts machinist.dll for download. Remember, decrypting is not illegal in all countries.

Thursday, February 02, 2006

....set the building on fire....

I can understand why Milton Waddams (Stephen Root) of Office Space would be up in arms about his red Swingline stapler going missing. My brother-in-law bought me the stapler for Christmas as a joke. But make no mistake, this heavy-duty die-cast metal stapler is all business. I'm able to punch through documents that would make normal staplers roll over and die.

Tuesday, January 24, 2006

Google Sightseeing

Google's social influence meets world tourist.

Monday, June 20, 2005

Das Keyboard

Ya, you typing.

Friday, February 18, 2005

Movin Madness

This month our client migrated their servers to another environment. When the actual date for the migration was upon us, it felt a lot like the Moving Van had arrived at the client's home and he was in his bathrobe frantically trying to wave it off for a few more days. For the most part, the server migration went fairly well, with some issues (big and small). I've outlined a few of them -- some of which drove me crazy. MSXML 4 - Access Denied We've got a neat little flash microsite that pulls an xml feed from an external site using classic asp. Interestingly enough, the simple ServerXMLHTTP method .Send() for a simple URL was returning an Access Denied error. Turns out, this is a feature of security hardening in MSXML4 SP2. I had to change the Local Security Policy, add the URL to the Trusted Sites internet zone in Internet explorer and REBOOT the server. Quite a bit of hassle just for a xml feed. Cannot resolve conflict in Collation Restored databases from the old production envrionment onto the new environment, and found that some applications weren't behaving as expected. When poking into the error, I found that i was receiving an error based on the current Collation (the language and sort order of the database) between databases were different. This was probably because the regional settings between the machines were different, and the databases that were created on the server defaulted to an incompatible. To resolve I had to: 1) Create a new version of the database with a different name 2) Use an ALTER DATABASE statement to set it to the desired collation. 3) Script the original database into a single block of SQL DDL statements 4) Remove all collation specific references on fields as the script would try and create varchar fields with specific collations. 5) Use a DTS task to copy the data from one database to the other, specifying in the task to Use Collation so it would adopt the collation of the target machine. 
6) Drop the original database and rename the new version to reflect the original name.

Cannot enlist in new Transaction

Brilliant. Two months ago I asked the new hosting provider if they had any best practices on how to configure an environment with a firewall between the database and web servers. The only answer I received from their tech team was to use port 1433 -- which, in layman's terms, is like saying cars need gas -- SQL always uses port 1433. The problem I knew we were going to have is when you actually try to use distributed transactions from the web server: there is a lot of communication between the web server and the database -- way more than just port 1433. When I found out they weren't aware of this concern, that should have been my first clue. I gave up on the hosting provider making this easy for me, so I provided them very clear instructions on how I was going to configure DCOM to restrict the web and database servers to specific ports. I clearly told them that once this was done, I would need two-way (inbound/outbound) communication on these ports. I outlined very specifically which ports needed TWO WAY communication. When I received an email confirmation that they had opened the ports for TWO WAY communication, I politely thanked them and went back to configuring my applications. When I hit the "unable to enlist in new transaction" error, I was a bit surprised, but as I hadn't had a whole lot of time to fully test the application in the new environment, not that surprised. I thought I might be having problems with incorrect registry settings, or name resolution, etc. It was about forty minutes later, after double-checking my settings and reading knowledge base articles on this problem, that I discovered the ports had been opened for the web server, but not the database.
The email I sent the hosting provider, to which I attached my previous email with the clearly outlined instructions, was, in retrospect, not that polite. I only wrote SOME of the email in ALL CAPS. (Incidentally, why is it that ALL CAPS LOOKS LIKE YOU'RE SHOUTING?????)

Unable to convert varchar to datetime

When I realized that the default regional setting of the server wasn't going to help, I went digging into the code. We had a form that collected the date in a very specific format:

Please provide your date of birth (yyyy/mm/dd):

At the code level, some very ancient classic ASP code was building the SQL statement inside the script (terrible!!!) and opening the recordset with the resulting SQL:

strSQL = "SELECT count(*) FROM myTable WHERE DateCreated = '" & Request.Form("TimeStamp") & "'"
oRs.Open strSQL, oConn

Brutal. Here we're basically asking the SQL server to resolve the text into a datetime using whatever text format the user supplied -- if your SQL box is configured with a different date format, you're pretty much screwed. While writing inline SQL inside your presentation code is considered extremely bad form, I can appreciate the developer's complaint that it's too much work to write a custom COM object just for a silly database call. But if you have to use inline SQL, you should at least attempt to use a stored procedure. And if you're so lazy that you can't write a stored procedure, then heaven forbid you write a few extra lines and use a parameterized SQL statement, like so:

strSQL = "SELECT count(*) FROM myTable WHERE DateCreated = ?"
Dim oCmd
Set oCmd = Server.CreateObject("ADODB.Command")
oCmd.ActiveConnection = oConn
oCmd.CommandText = strSQL
oCmd.Parameters.Append oCmd.CreateParameter("TimeStamp", adDBTimeStamp, adParamInput, 8, Request.Form("TimeStamp"))
oRs.Open oCmd, , 1, 3

Although it's a couple of extra lines, I sleep better knowing that ADO will take care of the datetime conversion.
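As a footnote for anyone hitting the same collation wall: the create/re-collate/rename steps of the fix above look roughly like this in T-SQL. The database and collation names here are placeholders, not the actual ones from this migration, and sp_renamedb is the rename mechanism from that era of SQL Server:

```sql
-- Create the replacement database (placeholder names).
CREATE DATABASE ClientDB_Fixed

-- Force it to the collation of the target environment.
ALTER DATABASE ClientDB_Fixed
    COLLATE SQL_Latin1_General_CP1_CI_AS

-- After scripting the schema and copying the data via DTS:
-- drop the original and take over its name.
DROP DATABASE ClientDB
EXEC sp_renamedb 'ClientDB_Fixed', 'ClientDB'
```

Note that ALTER DATABASE ... COLLATE needs exclusive access to the database, so kick everyone off first.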

Wednesday, January 19, 2005

the countdown goes askew!

Lori and I have been counting down the days to our house closing on a chalkboard in the kitchen. Our real estate agent just called us and asked how we felt about moving the closing date up by a month! I'm all for moving in early, but we've already paid for our March rent. We're trying to figure out a way to maximize our move-in time while minimizing the money going out.

Thursday, December 02, 2004

Buying a house in Toronto - Part IV

Wow, part IV, like it's "A New Hope" or something... So with the offer in the works, our agents worked furiously trying to contact the seller's agent and set up a face-to-face presentation of the offer. The idea is that it's harder to laugh in someone's face than it is at a piece of paper. Once the offer is set into place, if accepted, we only have five days to get everything sorted out: financing, insurance, lawyers, house inspection, etc. So we spent most of the day trying to get a jump on all that paperwork. Our agents managed to co-ordinate a face-to-face meeting for around 7:30... so the plan was to meet at a nearby restaurant, sign some more formalized documents (a zillion times, times four), and then have dinner while they negotiated. However, when we arrived at the restaurant, the seller couldn't contact his wife in time, and she had gone somewhere with their kids for a few hours. Since they both owned the house, they both needed to be there. She wasn't expected home until around 9:30. All this meant was that our pins-and-needles tension would be dragged out longer than we expected. We had dinner, went home, sat by the phone and waited. Around 10pm, the phone rang. There were a couple of things that had to be hammered out, ranging from the price to the alarm system. They had come down a little bit on the price, which was expected. Interestingly, they were waiting to sell the house before they started looking for a new home -- so they wanted additional time on the closing date. As first-time house buyers, this was to our advantage. Now the ball was in our court; we only had an hour to decide. We could come up as much as they had come down, and that could go two ways: we'd probably have another round of back-and-forth, or it would piss 'em off and they'd refuse our counter-offer. If we could pick the right psychological number, they'd be more inclined to accept. We came up to the lower half of the halfway point, pushed the closing date out.... 
and waited for our agents to call us back. Around 11:30, the phone rang again: the offer had been accepted. According to our agents, the husband wanted to sell the house and his wife didn't. They hadn't begun to look for a new house yet, and were waiting to see if they could sell theirs before Christmas. As soon as the offer was accepted, she went white as a ghost and began to bawl her eyes out. The only thing left was the house inspection...

Buying a house in Toronto - Part III

As we returned to the city from our weekend getaway, we decided to take another route and do a drive-by on the semi to get a better feel for the surrounding neighbourhood. Turns out, there were several more houses for sale in the general area. When our agents called us the following afternoon, they had already looked at eight other homes that were listed. "Zero for eight" -- none of them were even worth looking at. So we gave our agents the list of additional homes we had spotted, and they went to work trying to set up appointments. Since the houses were in the same neighbourhood, we'd start at the semi and go from there. The asking price for the semi was higher than the farm house, but after taking a long second look, Lori didn't want to look any further. Although slightly smaller in size, it would not require any renovations whatsoever, and it had a spacious garage connected to a shared laneway. So we stopped, and decided to find a place to talk about it. We somehow found ourselves at the scummiest coffee shop in the seediest area. The working girls and drug dealers turned tricks while we sat inside and talked. We found out later that a new shopping mall with a more reputable coffee shop had plenty of room only a few blocks in the other direction. Oddly enough, it didn't bother us. After some long discussions, my reservations with the place were put to bed, and we decided to put in an offer. It's funny how my negatives about the house seemed to disappear when we spoke of putting the offer in nearly 18K below their asking price. In Toronto, most houses sell well over the asking price, mainly because of bidding wars. However, this time of year is the best time to look, mainly because no one wants to look / move / sell during Christmas; in some cases the market drops dead around Christmas and starts to pick up again around February. Once February rolls around, the prices start to inflate dramatically. So it was now or never. 
We put together the offer, listing all the items that would be included and excluded in the house, and any additional conditions we could think of. Then we had to initial the documents in about a zillion places. We took a risk and decided to make our offer below asking price, and so that the seller wouldn't get pissed off, we took the appliances out of the offer. ... we were sold, but the question remained: would the seller agree to our conditions?

Wednesday, December 01, 2004

Buying a house in Toronto - Part II

After a few weeks of driving around and getting a feel for price ranges in different neighbourhoods, we called up the agents we worked with last year. They're a husband and wife sales team, great people, and parents of a friend at work. The tag-team duo works well because they split up in the mornings to cover more ground. By mid-afternoon, they've narrowed the search down to a few good candidates. There's a whole range of emotions that you go through with each house, and each one is wildly different. On the outside, some look like solid homes with lots of potential, but the insides fill you with terror that they'll collapse at any second. But my personal favourites are the houses that have that 15-degree slant to them, where the brand new kitchen they've installed to distract you from the slant has been custom fit on that angle. What we didn't expect was that we would find a house we liked on the first night out. In fact, it was the second house we looked at. It was a detached three-bedroom with formal dining and living rooms and a finished basement, built in 1913. It had amazing character, was completely renovated to retain the original charm of the home, and was within our price range. The only drawbacks: it was a bit further north than we were accustomed to, in a neighbourhood we weren't crazy about, with no parking. Everything we looked at after that was compared to this home. We went back for a second look, and aside from the neighbourhood, we were sold. Another house, a few blocks south, was the complete opposite in character and charm. It was probably 65 years old, but the owner worked in construction. He had spent the last 13 years rebuilding the house from the inside out, and had furnished it with top-of-the-line everything. On top of having a laneway in the back, he had built a two-car garage with ten-foot ceilings -- an oddity in the Toronto market. The craziest part of the house was the fact that the current owner's tastes were ... how to put it... ok, awful. 
The house was uncomfortably crammed with tasteless junk, almost garish, and it made it difficult to see the value in the home. A few days later, convinced we were interested in the old farm house, we took the Friday off and spent the morning driving around the neighbourhood. There's lots of construction in the area, with new condos and townhomes going in -- the neighbourhood is sure to change over the next five years -- but still, we weren't convinced. Lori wants to have kids, and she couldn't picture herself taking the stroller out by herself. We were out of town for the weekend, which gave us some time to think about it. The more we thought about it, the clearer it became: the old farm house would eventually need work, and some things just weren't going to change.... ... could we trade the warm detached farm house filled with books and landscaped perennial gardens for the garishly decorated semi (complete with disco ball) and two-car garage?

Buying a house in Toronto - Part I

Last year, after the initial fuss of our engagement had settled down, Lori and I decided to buy a house. As house prices are high, the price tag of a wedding plus a house purchase seemed prohibitive, so we thought we might buy a house and throw the wedding there. It didn't take long for our optimism to fade, and we quickly became very discouraged. Discouraged doesn't come close to describing how messed up buying a house in the city actually is. So the dilemma is: do you buy a house in the city and pay through the nose, or do you save your money, buy a house outside of the city, and spend all your free time commuting? Personally, I enjoy sleep too much to get up early for commuter trains, and I love that the TTC lets me come and go between work and home as I please. Buying a house in the downtown area borders on madness. The houses you can afford either have three kitchens in them and require major renovation to make the space livable, or they are so small that moving in would be a complete change in lifestyle. Finding a balance between them is difficult, and often is a matter of timing more than anything. The problem Lori and I currently have is that the neighbourhood and apartment we're currently in are amazing. Well, at least we think they're amazing -- we may look back years from now and have a good laugh at it. In short, though, we're in an $800K home in an $800K neighbourhood. Anything we move to will be a step down from what we have now: two bedrooms, two dens, two bathrooms, a large spacious kitchen, a large master bedroom, a patio, a backyard, private parking, all utilities included, and about 100 feet from transit. We really wanted to buy a home in our neighbourhood, so postponing the purchase allowed us to save a bit more for a down payment. So, a few weeks ago we started up the house hunt again. This time, we spent a couple of weekends driving around, trying to get a feel for the surrounding neighbourhoods. 
We saw lots of listings in our area, and we looked them up on MLS, but quickly discovered that even the smallest dumps in our area were beyond our price range -- and if we could afford one, we'd be burying ourselves so far under that our lifestyle would exist to support the house. We made a trip out to the east end of the city, which is popular with first-time home buyers. The east end is completely foreign to us, and to make matters worse, we were driving around it on a weekend that the Don Valley Parkway was closed for maintenance. We spent nearly an hour on Queen St East, and got a really good whiff of the garbage processing plant. Unfortunately for the east end, we resolved that we were not east end people, despite how pleasant everyone makes it sound. We decided to focus our search on the west end of the city, just outside of the downtown core, expanding our search area to the surrounding neighbourhoods and compromising on location for price. .....

Tuesday, November 09, 2004

Halo 2 and FireFox released today...

Is it a coincidence that MS released their flagship Xbox game Halo 2 on the same day that the heavy-hitting open-source browser Firefox shipped version 1.0?

Hard drive on the fritz

Problems with my PC, yet again. Seems like any time I want to do anything productive, I have to rebuild my computer first... This time it's either the 120GB Western Digital SATA hard drive or the Promise SATA150 TX2plus Serial ATA controller card. I've had them for about four months, and hadn't suspected anything wrong...

Clearly, I have problems with the HD: files appear to be corrupted, and about every other boot XP decides to scan the disk and correct problems with invalid entries in indexes, corrupted attributes, and orphaned files. Each time it tries to resolve problems, more files appear to become corrupted. My data drive and OS are installed on the same physical drive, but on different partitions, and the problem spans both partitions. Last night, the system became completely unbootable. In Safe Mode, CHKDSK produced the "This volume has one or more unrecoverable problems" error. I reinstalled the OS on the same partition without formatting (I needed some data off that drive). I'll try to back up my data drive, purging old data as I go, and then do a full format of the drive.

I've also noticed that newer drivers are available for the SATA controller card, which are digitally signed; the version I'm currently using is not. I left the house this morning with a disk diagnostic running -- it should be done by the time I get home. If the drive checks out fine, then I'm going to blame the SATA controller. The irony of this is that about two months ago, I gave my mother my perfectly good 40GB Western Digital DMA/133 drive. Hopefully I can get things back up and running without having to buy an external drive or something.