Archive for the ‘Ongoing Education’ Category


Thanks to all who have already registered for Virtual Fox Fest; we appreciate your support, and our speakers are excited to share their sessions with you soon. There is less than a month until the first day of the conference!

That said, we know there are procrastinators who have not registered but have good intentions to do so before the conference starts. Please, please, please do so sooner rather than later. Our hard-working registration staff of one is also a presenter who is practicing and refining his session, and has more than a full-time job working on customer projects.

In case you are procrastinating, please register for Virtual Fox Fest before October 1st. Three good reasons:

  1. Save yourself $50!
  2. Our registration staff has a super busy October and would appreciate you saving $50. Not joking.
  3. If you wait until the week of the conference to register, you might not get your credentials until just before the first session on October 14th. You’ll miss all the details about the conference we’ll be sharing over the next few weeks, as well as the chance to read white papers and download the examples before the conference starts.

It literally takes just a few minutes to register. We’d hate to see you miss the opportunity to get a head start on all the goodies. Go get registered! Now, really, don’t waste another minute. Here is the link:

Virtual Fox Fest is October 14, 20, and 26, 2021!

Only 22 days until we gather via the Internet. I look forward to seeing everyone again.


This is a true story of a day in the life of several software developers (one who proudly and regularly declares #IHATEHardware) and a hardware/networking professional, and one of our customers who will of course remain anonymous for obvious reasons. That said, I share this story of lessons learned and reinforced in hopes that this happens to no one else and that it encourages you to help others protect their data assets so they are not taken to the edge of losing their business.

My days normally start out around 8:00am because most mornings I like to sleep until something naturally wakes me up. Most days it is construction noise in the neighborhood, my wife’s alarm, or the dog, but on July 16th it was a phone call from Frank Perez, one of my teammates at White Light Computing. It was a very early 6:15am. I was waking up out of a dream where I was in a stadium of people and there was an earthquake happening (probably something in the 5.0 range, which was kind of cool). In my dream my phone was ringing too. Surprisingly, I answered it, and it was Frank, who started talking through the details of an investigation he was conducting based on a slew of error reports overnight from one of our customers. Normally the error reports are related to the network failing, which is reported to the customer’s IT Director. But these error reports started early and were “not a table” errors. Frank connected to the server where the data was located and tried to open the tables named in the error reports. They failed to open. Upon further inspection he found them encrypted, and in the same folder he also found two files:

1) How_to_decrypt.GIF
2) How_to_decrypt.HTML

(Note: the instructions in the two files are not the same. The HTML file made me quite nervous as it could have active content. To be extra safe, I do not advise opening this file in the wild.)

Frank suspected that someone had opened and unknowingly installed Cryptolocker or one of its variants. This was the second time in a few weeks Frank had seen this, but at a different customer site (one that literally had no backups). Based on the time stamps, Frank guessed it started between 8:00 and 8:15pm the night before, so it had been running for about 10 hours. My experience and the research I had done on Cryptolocker said that it isolated itself to the computer it was installed on. This was the first time I had heard of it jumping from a workstation to the server. The day was going downhill quickly.

Here is an image of the How_to_decrypt.GIF:

A king’s ransom

Something you never want to see on your computer!

(I’ve blurred out a couple of things that might identify our customer.)

This was not how I was expecting to start my Thursday. I formulated a plan to contact key people and then head into their office with Frank. I talked to the owner of the company, who I learned was out of town and a couple of time zones away. I talked with the IT Director, who was away on vacation, to get the lowdown on the backups and where they were. I knew that without the data, people were going to be doing a lot of manual work, and most of the workers would not even be able to do their jobs. Awesome news: a backup of the server is taken at 5:00 each day. It sounded like we might only be missing a few hours of data, and the workers on duty between 5:00 and 8:00 use the apps with SQL Server and not the DBFs, so things were really sounding like they might not be as bad as I originally feared.

For those who have not been introduced, Cryptolocker (aka Cryptowall, CryptoOrbit, and Cryptolocker 3.0) is ransomware, and it is not fun at all. I have seen it too many times in the past couple of years at customer sites. Although it behaves like one, this “software” is not a virus; it is a rootkit that establishes itself on the computer. It arrives via socially engineered email attachments that can fool even the savviest of computer users who know better. The software installs via a link from the Internet. It then calls home to get a key and begins to encrypt files with predefined extensions, a list that started out as MS Office extensions but has been expanded (oddly, INI and XML are not on the list). Unfortunately, Visual FoxPro data files fall into the list. The process encrypts the files one folder at a time. The first variant of this software stuck to the local computer, so if someone opened the attachment and followed the link, only one computer was affected. Still, for some of our customers this can be bad enough, depending on the computer that gets hit. But this latest variant also hits mapped drives, so files on a server or another computer in a peer-to-peer network can join in on the fun. And the performance is very impressive: it had all the files in the data folder on the server encrypted in less than 20 minutes.

I learned Thursday from someone who recently tested six of the most common anti-virus and malware programs that not a single one of them found it on an infected machine. The day got worse.

There are two ways to get your files back: restore from backup or pay the ransom and decrypt the files using the key returned from those holding them hostage. If you have good backups, it might not be too bad depending on the timing of the backups. I was thinking it would not be a problem as there are daily backups and we had the most recent a few hours before the attack.

So back to the 7:00am hour: I contacted a couple of people on my team who help support this customer and the key players at the customer site, and headed into the office.

Once at the office we met with the newest member of the team, the new hardware/networking tech for our customer. Frank explained his findings and our hypothesis. The tech had recent experience with the newest variant of Cryptolocker, confirmed Frank’s conclusion, and gave us the lowdown on what had happened, how this ransomware works, and what we needed to do.

Developing the plan of attack:

  1. Disconnect each computer from the network in case of propagation. Kill the wireless so no laptops and other devices could connect to the network.
  2. Search each computer for ransom files, starting in the room that was working around 8:00 the night before, to find the computer doing the encryption (“patient zero”).
  3. Remove the computer from the room.
  4. Verify problem really is what we hypothesized.
  5. Determine the damage on the workstation and the server.
  6. Step back and develop the recovery plan.

The approach, the collaboration, the planning, and the implementation of the plan reminded me of how firemen approach a fire. If you follow a fire truck to a fire, you are likely to witness something that at first seems disturbing. The truck stops and the firemen get out. They are not running around. They are methodically executing a plan, which to the common person might seem to be working at a slower pace than is needed to get the fire out. As the fire rages in the building, the firemen get their gear and strap on air tanks; they put ladders up and get on the roof; they pull the hoses off the truck and attach them to the fire hydrant; they put on their air masks; some start cutting holes in the roof and others start throwing water on the fire. Often the fire is out in short order. It is because of the planning, the training, and the implementation of the plan that things work so well. This is how we worked to find the troubled computer and determine how to get the customer back to work.

Finding the machine that installed Cryptolocker turned out to be simple, as all we had to do was search for the file names above on the C: drive, and possibly other drives, on each computer. In this office there are close to 50 computers, so the task took a little time with three of us unplugging and searching. We found the troubled computer pretty quickly. Murphy’s Law would have dictated that it show up on the 50th computer, but instead it was one of the first.
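Our sweep was done by hand, machine by machine, but the search itself is easy to script. Here is a minimal sketch in Python (the filenames are the actual ransom notes we found; the drive root in the example is a placeholder, and running anything on a live infected machine is a judgment call):

```python
import os

# Filenames this Cryptolocker variant dropped in each encrypted folder.
RANSOM_NOTES = {"how_to_decrypt.gif", "how_to_decrypt.html"}

def find_ransom_notes(root):
    """Walk a drive and return every folder containing a ransom note."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        if any(name.lower() in RANSOM_NOTES for name in filenames):
            hits.append(dirpath)
    return hits

# Example (placeholder path): scan the C: drive of each workstation.
# for folder in find_ransom_notes("C:\\"):
#     print("Ransom note found in:", folder)
```

A hit on any drive marks a machine as touched; patient zero is the one where the notes appear on the local C: drive.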

The fact is: we considered paying the ransom to get the server back to normal. The people cost involved in rebuilding the server and restoring the files was much more than the ransom. Obviously one has to understand the ramifications of giving money to the criminals. But what if it was necessary? I have talked to several of our customers who have been hit, and several other colleagues whose customers have been bitten, and sometimes the backups are not good enough and the money needs to be paid to stay in business. It is these kinds of moral dilemmas that can keep one up at night.

We started looking into it and thought through the process to the point of getting a spare laptop and potentially sacrificing a MIFI device to get to the hackers’ Web site and instructions. We did not really know if something that connected would get infected, or what the potential effects on the hardware could be. Even the thought of searching for and connecting to something like the FBI site in search of keys was scary to me. Who knows what fake sites could be set up. We had also read and heard that Cryptolocker can get installed just by visiting a URL, so we did not take any chances. Before we got started, we realized that the ransom note stated a 1 to 10 day turnaround on getting the data back. We were not sure if this meant 10 days to get us the key, or 10 days for the solution to decrypt all the files it encrypted. Additionally, the ransom required bitcoin as payment, and acquiring bitcoin was new to all three of us. So we left that as the last resort option and moved forward with the better plan.

Second plan of the day:

  1. Determine the ransom and steps to pay it (last resort).
  2. Update the customer on the situation and explain the ransom, and what we need to do. Get permission to pay the ransom as a last resort.
  3. Build a new virtual server to replace the virtual server with the encrypted files. We wanted to leave the old server intact in case something was important in the restore of the new server.
  4. Restore the backup from the previous day to the new server.
  5. Reconnect the workstations to the network, and test the systems.
  6. Get home in time for dinner (not really in the plan, but if all went well…)

Rebuilding the server was not my thing (remember #IHATEHardware), but Frank and the networking tech did not mind and got started. The IT Director had the Windows Server ISO and keys staged for us to use. Hyper-V and the ISO made short work of getting the server operating system installed. But lo and behold, the keys did not work. It turned out the server was R2 and we had keys for something else. We looked for the proper ISO and key combinations and found a stash of DVDs with different versions. Several hours later, we downloaded the proper ISO to match the existing virtual server and got it installed. Still enough time to get the backup restored and everyone home for dinner.

The backup was restored. We poked around and saw quite a few files missing, including DBFs, CDXs, FPTs, EXEs, and DLLs. Some folders had all the files in the data folder but were missing the EXEs in the application folder. Some folders had the EXEs but were missing the runtime files. There was no obvious pattern.

The network tech dug into the backup software and came upon a revelation: we had restored a differential backup. Ah, perfect, so we had more work to piece the restore back together. First we had to find the last full backup, then restore the differentials after restoring the full. More work, but an easy enough plan of action. Our customer has four solid state drives rotated as the backups (a fifth daily drive was on order to replace the previous fifth one), each capable of holding 680 GB. Fortunately, earlier in the day our customer’s onsite developer had requested the Controller bring the offsite drives back to the office in case they were needed. Perfect, the plan was working. Then the new networking tech delivered news that was about as devastating as Frank’s original find of Cryptolocker. The backups for the last 16 days were ALL differentials. He could not find the last full backup.
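For anyone fuzzy on the terminology: a differential backup contains every change since the last full backup, so a usable restore needs exactly two pieces, the most recent full plus the most recent differential layered on top. Without the full, the differentials are nearly useless. A toy sketch of that layering (the file names and versions are hypothetical):

```python
def restore(full_backup, differential):
    """Rebuild the file set: start from the full backup, then overlay the
    differential, whose files (everything changed since the full) win."""
    restored = dict(full_backup)   # baseline from the full backup
    restored.update(differential)  # newer versions replace the baseline
    return restored

# Hypothetical example with two tables:
full = {"orders.dbf": "v1", "customers.dbf": "v1"}   # Sunday's full backup
diff = {"orders.dbf": "v2"}                          # Wednesday's differential
print(restore(full, diff))   # both tables present, orders.dbf at v2
```

Restoring `diff` alone leaves `customers.dbf` missing entirely, which is exactly the hole we fell into.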

I placed a call to the owner to explain the situation, and a second one to the IT Director, who explained where to find the full backup. Unfortunately, what he pointed us to was the differential backup we had already used. You could feel the room deflate. As you can ascertain, we effectively had no backup. Holy cow. My stress level went up a notch. Earlier in the day the IT Director had told the owner there were three options:

A) Restore the backup
B) Pay the ransom
C) Pack it up and go out of business

Going back in time…

Many years ago, when we needed a test data set, we would ask the previous IT Director and she would give it to us a day or two later, since she had to restore from tape. The restoration process was a pain in the neck and resource intensive. So to help us out, I asked Frank to develop a rudimentary backup process to run nightly at midnight. This process copied key files to a folder on one of the computers that is not the server. It was never intended as a full backup or part of the disaster recovery process. From time to time the old IT Director would recover files we backed up because it was quicker than the restore from tape. We benefited from this by grabbing the backup for our test machine.

One of our contractors happened to be in the office on Tuesday and grabbed a copy of the data from Monday night’s backup for some testing he needed to do. He does this every so often when he is in the office, but he is not there every day and has been known to take long vacations. Earlier in the day I had asked him to secure that backup just in case it was needed, never expecting we actually would.

A few years ago I requested a test machine to create an isolated environment for the customer to test our application changes. The owner has so much faith in us that he prefers to test in production. We know better and never have that level of trust in ourselves. After many requests and some serious pushback and flak from the current IT Director, we got a test machine, which is a separate VM in Hyper-V. The last major testing we did was last August, but at that time we had refreshed the entire VM from production.

Back to solving the problem… We knew we had more options than the IT Director had listed:

  • Restore the backup
  • Rebuild the backup from Tuesday, restore previous night, and leverage the test machine.
  • Pay the ransom
  • Start with a baseline from last August from test machine
  • Absolutely no talk of going out of business, yet

Our biggest concern was that our backup from the night before was taken four hours after the encryption process started. But one thing Cryptolocker cannot do is encrypt files that are open. It just so happens one or more people left an application or two running with some very important files open. Mind you, corporate policy states that employees close all the apps before leaving for the day. So, because someone violated corporate policy, our backup was able to capture some really important files. Sure, these files would have been on the nightly backup from seven hours earlier, but we had even fresher data.

We ended up implementing plan B and it worked. We restored the Tuesday backup, then the previous night’s backup, and then our midnight backup. Still, 77 DBFs were not restored. We used Beyond Compare to help determine the missing files (thank you, Scooter Software, for the best file/folder comparison software around). It turned out that many of the tables were static, some were temporary, and some could be rebuilt or ignored completely. We used Beyond Compare to move the missing files from the test machine to the production server. The three of us then grabbed the remaining files, like the latest EXEs and runtime files, from our machines to fill in the gaps.
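If you do not have Beyond Compare handy, the core of that comparison step, listing files present in one tree but absent in another, can be approximated in a few lines. A rough sketch (the UNC paths in the example are placeholders, not our customer's real shares):

```python
import os

def relative_files(root):
    """Return the set of file paths under root, relative to root."""
    files = set()
    for dirpath, _dirs, names in os.walk(root):
        for name in names:
            full = os.path.join(dirpath, name)
            files.add(os.path.relpath(full, root))
    return files

def missing_from(reference, target):
    """Files that exist under `reference` but are absent under `target`."""
    return sorted(relative_files(reference) - relative_files(target))

# Example (placeholder paths): compare the test machine to the restored server.
# for path in missing_from(r"\\testbox\apps", r"\\server\apps"):
#     print("missing:", path)
```

A real comparison tool also diffs file contents and timestamps; this only catches outright missing files, which is what we needed for the 77 DBFs.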

Sure, it is not perfect, as some of the data was from August of last year, but we know we have all the key things covered and the core data is the latest and greatest.

I texted the owner the good news and told him I would be in the office before they opened for business on Friday. We left at 10:30pm.

Friday had a few glitches here and there (mostly because we missed some of the Visual FoxPro Reporting APP files), and a couple of machines that relied on wireless access could not be used until we checked out all the laptops coming in from the satellite workers. The only machines affected were patient zero and the file server.

Lessons to reinforce/learn:

  1. Backup, backup, backup.
  2. Full backups are better than differentials.
  3. Differential backups rely on a full backup.
  4. Test the backups.
  5. Have multiple generations of backups.
  6. Keep multiple kinds of backups (daily, weekly, monthly).
  7. Use multiple storage methods for backups (disk, mobile disks, offsite and onsite, cloud).
  8. Review the processes and the disaster recovery plan periodically.
  9. Refresh the test machine with production on a more regular basis.

It pays to be lucky

We absolutely lucked out this week. We lucked out because our contractor was in the office on Tuesday and grabbed a backup. He easily could have been on vacation like so many people this time of year. We lucked out because we had solved a pain point years ago by creating this backup in the first place. We lucked out that Frank and the new network tech had some recent experience with Cryptolocker. We lucked out that the network tech is very bright and works well with the development team (IT support and developers do not always get along, in my experience). We lucked out that we have a test machine that had the rest of the files. We lucked out that one or more employees violated corporate policy and had the apps open, which normally gives you fits when trying to back up files. We lucked out that our backup process has the intelligence to back up open files. We lucked out that our customer had faith in us. We lucked out that we could deliver a working data set. Our customer lucked out that he is back in business so quickly.

I mentioned that our customer had faith in us. He told me on Friday that his IT Director did not think we would be able to fix this. His daughter, who works in IT at a local community college, did not think we would be able to pull the Phoenix from the ashes. I explained to our customer that from time to time during my career we have relied on pulling off an “IT Miracle,” and each of us is limited in the number of miracles we can pull off. This past week I used up another one. Yes, there were other options, but each was not as good as the ones higher up on the list, and each had higher costs to the business and long-term ramifications. And one of the options meant giving money to criminals, which is a decision you cannot put a price tag on.

The really sad thing about this is that there is no protection from it happening again. In fact, more than one computer could easily have been attacked. The same email could have been opened by more than one person. The same email could arrive tomorrow at the office, and is certainly being delivered each day to other people around the globe as you read this post.

Thanks for taking the time to read our story of how one company went to the brink of disaster and survived to talk about it. I hope the lessons learned and lessons reinforced trigger action on your part to review the disaster recovery plan. If there is no plan, I hope you take the time to make one. Also, take the time to discuss this with your customers. Leave no one behind.

To the entity in charge of my count of “IT Miracles”: please grant me double the count I have remaining today. I’m certain this won’t be the last time I need to count on one.

Thanks to everyone who helped out that day. The teamwork was amazing! I never have to be reminded of how great a team we have at White Light Computing; last Thursday the team shined brightly. We also have a great customer and a newfound friend (the networking tech) who I look forward to working with for many years to come.

It is late spring, and that means one thing around Geek Gatherings LLC: time to open registration for the Southwest Fox and Southwest Xbase++ 2015 conferences. Super-Saver Registration, which saves you $125, is available only through June 30th, so don’t wait.

As Doug, Tamar and I have explained before, putting on a conference is a risky endeavor. Conference centers require a guaranteed minimum income to block the dates of a conference; for a conference like Southwest Fox and Southwest Xbase++, that minimum is in the tens of thousands of dollars. We have to confirm our commitment to the conference center by July 2nd and need your support by July 1st to make that commitment.

We won’t be charging credit cards or depositing checks until some time after we make the “go” decision, so there is no reason to hesitate to register immediately.

In addition, as we recently said, if Southwest Fox Super-Saver registration is strong enough, we’ll add some speakers and topics.

Here is what you can expect from Southwest Fox and Southwest Xbase++:

  • Two simultaneous conferences for the price of one.
  • Terrific selection of sessions from great presenters.
  • A total of 25 regular conference topics between the two conferences, 5 pre-conference sessions to choose from, and a keynote to pack your days with learning opportunities and inspiration.
  • White papers from every session (mandated by the organizers) so you can read about sessions you can’t fit into your schedule, or review material you saw at the conference when you return home.
  • Lunch Thursday if you register for two pre-conference sessions. (You can also purchase lunch Thursday at our cost.)
  • Lunch Friday and Saturday for all attendees.
  • Dinner Friday night.
  • A free pre-conference session if you register by June 30th.

So please register soon, and encourage every Visual FoxPro and Xbase++ developer you know to register, too.

Southwest Fox sessions:
Southwest Xbase++ sessions:
Get added to our email list:

Only 136 days until we gather in Gilbert! I hope to see everyone there.


Over on the Southwest Fox blog someone asked the following question in the comments for our post announcing the Windows 8 keynote:

Why have MS promote their products when they don’t care about developers? MS discontinued VFP.

Comments often get lost, and sometimes people read the posts without looking at the comments, so I am going to post my answer here.

It is a good question. This is not about promoting Windows 8; this is about educating Visual FoxPro and Xbase++ developers on a new operating system their customers are eventually going to consider and use in their businesses. You need knowledge to help guide your customers and users. This session is going to help you learn the advantages and pitfalls of Windows 8 and how it is going to affect your customers’ businesses.

Jennifer is not a marketing person; she is a developer and has lots of good information to share with other developers. This keynote is about helping developers get past the pundits and press, and down to the nuts and bolts of the next OS developers have to consider when deploying applications. Plain and simple.

What is one of the most common concerns about the future for VFP developers? Answer: will my applications continue to run on the next Windows? Xbase++ developers want to know what new features their applications can work with, too.


Several people have asked me to clarify the following tweets I made last week:

!/rschummer/status/76152986992254976
!/rschummer/status/76280482454708224
!/rschummer/status/76286820048044032
!/rschummer/status/76289027896115200
!/rschummer/status/76301141113188352
!/rschummer/status/76326750010867713

I was mostly tweeting to a couple of co-workers who wisely passed on the workshop, but it raised some interest among my followers. At the time I did not want to reveal which workshop I was in, hoping it would get better, but now that it is over and I found it disappointing, I thought I should share my thoughts. I do this in case it helps others decide whether the session is worthwhile to them in the event Microsoft decides to do more of them around the planet. Since 140 characters is not nearly enough… here is my story.

This week I attended a one-day workshop from Microsoft called WebCamp, specifically a special WebMatrix and ASP.NET MVC WebCamp hosted by a couple of Central Region Microsoft Developer Evangelists. When I signed up for this workshop, the agenda stated the following:

  • Web Stack Introduction
  • Building a Site in WebMatrix
  • jQuery Fundamentals
  • ASP.NET MVC Introduction
  • Migrating from WebMatrix to ASP.NET MVC
  • Instructor-Led Labs

Since Microsoft is marketing the free WebMatrix to my customers as a simple way to publish Web sites, I thought I might get up to speed on the tool. I also anticipate some of our customers potentially hitting a wall with WebMatrix and asking us to migrate to a more robust solution. If there is an easy path to ASP.NET MVC, and that is something we can use to help them, all the better reason to attend this session.

I should state up front that I had very low expectations going into this workshop, based on my past history with Microsoft developer workshops, mostly because I walk out feeling like I just listened to mostly marketing-speak and a lot less technical-speak. That said, even at the worst workshop I have attended, I walked away with something of value that allowed me to justify at least part of the time spent. I also have a history of being let down by Microsoft Developer Evangelists (with the exception of a couple of exceptional ones, like Jennifer Marsman, who is in our region).

I also want you to know the Southfield, Michigan (a Detroit suburb) workshop was not the first time this session was given. The room was completely full with approximately 80 people; I would say the venue was completely “sold out.” Also, registration for the WebCamp was free.

A couple of days before the workshop we received an email noting we should download and install:

  • Microsoft Visual Studio 2010 (get the trial)
  • Download and Install WebPI 3
    • Install WebMatrix (via WebPI)
    • Install MVC3 (via WebPI)
  • Download and install the Web Camps Training Kit

The email arrived a couple of days before the holiday weekend. Fortunately, I did not get this email, but one of my co-workers did, and it took her hours to download and install everything, hours taken away from doing billable work. These downloads were necessary for the marketed hands-on workshop. (more on this in a minute)

In fact on the WebCamp Web site it states:

A little pre-work will go a long way. Your only homework is to make sure your machine is setup ready to go and you come with questions. Remember these are interactive.

Please, note at events like these, power and bandwidth are limited to some degree. If you download the tooling before the event that will help relieve stress on the network. At some of the events we will not have enough power for everyone.  We ask that everyone share appropriately, and if you have multiple batteries it might not be a bad idea to bring it.

I hit some bad traffic on the way to the workshop (a 45-minute drive turned into a 75-minute fiasco), so I arrived just before the official start time of 9:00 on the agenda I got in email, meaning I should only have missed breakfast. The speakers had already started with some introductions when I arrived.

One of the first things announced was a change in the agenda: no hands-on labs today. This was spun as good news since it meant we would get out early. So basically the first smackdown of the day was that everyone who spent hours downloading and installing software might have wasted their time. The agenda was also significantly different from the one I originally signed up for, and in my mind, not in a completely good way. Added was an introduction to HTML 5 (not a bad thing); gone were Building a Site in WebMatrix and Migrating from WebMatrix to ASP.NET MVC, the two primary reasons I signed up. My initial thought was to leave, but at that point I was willing to give them the benefit of the doubt that the refinement of the agenda was based on previous presentation feedback, and maybe it would be even better.

I knew this day had a dark cloud over it when the first thing they asked us to do was go to the WebCamps Web site (built in a few hours with WebMatrix) to register for the daily drawings, and the site would not come up. It had nothing to do with the Microsoft Internet access, either, as I was using my Verizon MIFI card. The site was broken (and later fixed so we could register). Bad omen.

The HTML 5 discussion opened up the old Silverlight vs. HTML 5 wound from 2010. The explanation meant to clarify Microsoft’s position only seemed to muddy the waters, with comments like (and I paraphrase here):

  • This is my opinion, not the official Microsoft opinion
  • There are things the evangelists are not being told that are being decided in Redmond.

What? Redmond is making decisions about future product development and not involving, or even telling, the people who are their closest customer contacts in the developer community? Either I misunderstood the message, or it was purposely confusing so I would not understand it. Either way, the message was sloppy.

If I were new to the Microsoft grinder wheel of deprecated technologies, I would have walked out of that part of the discussion wondering what I had stepped into. I was hoping to hear that I could go to a specific Web page to read the official roadmap for Silverlight and HTML 5 and the Microsoft position. But anyone who knows the Microsoft developer division knows you won’t get a straight answer on this. The speakers should have just stated this and moved on. Instead they wasted 20 minutes confusing the issue more.

The one clear thing stated and something that should be obvious to any developer is that there are no broad right answers. Each decision to implement technology is based on the circumstances of the project and what is available to help create the solution at the time it is developed. No one should be able to tell you that you should always use Silverlight, or always use HTML 5 without knowing all the requirements and resources (money, time, skills) available to the project team.

What is not clear to developers though is what Microsoft plans to support and what makes sense for developers to invest their training dollars and time learning. I walked out of this session with more confusion, not more clarification.

The first section of the day was JavaScript Fundamentals. All I can say is that the presenter was condescending, insulting, and obnoxious. Completely unprofessional. Examples crashed over and over. I have seen a number of sessions where the demo gods were not kind to the presenter, but this one was a fine example of what not to do when training new presenters. First of all, this was not the first time this session was given; I was told it was the 11th stop of the WebCamp tour around the Microsoft Central Region. Second, these are Microsoft Evangelists doing the training. Their job is to learn the Microsoft technologies and then show developers this technology so we can adopt it. It is their job to show us how well it works so we have an “ah-ha” moment and start using it to build solutions for our customers. What we saw was a train wreck. Clark Sells either was not on his game Wednesday, was distracted by some external force, or is not competent in his job. This is not the first time I have said this about a Microsoft Developer Evangelist, unfortunately. I have not seen Clark present before, so I can only hope he was having a bad day.

As the day progressed he became less obnoxious and fewer demos crashed, but when it comes to the signal-to-noise ratio, it was night and day between his presentations and Brandon Satrom’s.

I provided a number of examples of Clark’s unprofessional techniques in the evaluation sheet I handed in, so I won’t repeat all of them here, but my favorite was his offhanded comment about sites that support certain browsers with: “give you the middle finger and tell you to download Google Chrome.” While I know some people found his antics entertaining, I found they distracted from the material. I appreciate speakers who add humor to their sessions, but in this case the bad humor was an attempt to mask the bumbling and fumbling through the presentation crashes, and it was in my opinion a disaster. Maybe not an epic disaster, but for me it was a complete waste of time listening to someone showing me why raw JavaScript is a pain to use, and why I should be using a supported framework like jQuery. Those who already understood this truth were bored and off surfing the net during the presentation. Those who did not understand it might not have learned it in the end.
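To give a flavor of the point the session was (inadvertently) making: in 2011, wiring up something as simple as an event handler in raw JavaScript meant branching on which API the browser exposed, which jQuery hides behind a single call like $(el).click(fn). A minimal sketch of that branching follows; the stub objects are my own assumptions standing in for DOM elements from a standards browser and from legacy IE, not real DOM nodes.

```javascript
// Cross-browser event wiring, the hard way.
function addClickHandler(el, fn) {
  if (typeof el.addEventListener === 'function') {
    el.addEventListener('click', fn, false); // W3C event model
  } else if (typeof el.attachEvent === 'function') {
    el.attachEvent('onclick', fn);           // legacy IE event model
  } else {
    el.onclick = fn;                         // last-resort fallback
  }
}

// Stub "elements" that record which code path wired them up.
function makeStub(kind) {
  const stub = { wiredVia: null };
  if (kind === 'w3c') {
    stub.addEventListener = function () { stub.wiredVia = 'addEventListener'; };
  } else {
    stub.attachEvent = function () { stub.wiredVia = 'attachEvent'; };
  }
  return stub;
}

const modernEl = makeStub('w3c');
const legacyEl = makeStub('ie');
addClickHandler(modernEl, function () {});
addClickHandler(legacyEl, function () {});
console.log(modernEl.wiredVia); // "addEventListener"
console.log(legacyEl.wiredVia); // "attachEvent"
```

A framework like jQuery makes this branching someone else’s maintenance problem, which is exactly the lesson the audience was supposed to take away.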

I am the kind of developer who learns by doing. I can read until I am blue in the face, and I can watch others demonstrate things all day long, but those only reinforce in my mind what I am capable of doing some day. It is not until I actually sit down at the computer and do it that I learn it. So to me the loss of the hands-on workshop was a major disappointment. Granted, I did not have the software loaded, but my co-worker did, and together we would have learned during this time.

I have seen a few jQuery introductions at conferences over the last year by Rod Paddock, Paul Mrozowski, and Steve Bodnar. Microsoft should just hire one of those three guys to give this portion of the WebCamp, as their sessions were 10x better.

The session on WebMatrix had so much potential. The product is quite interesting, as you can start with template sites that leverage open source tools like WordPress, Umbraco, Joomla, Orchard, and Drupal. Unfortunately we never really saw much of how WebMatrix works, or how you would go about building a site beyond the canned templates. I am sure there is a lot more to this product than what we saw.

During the MVC section the presenters built a podcast database site with some basic functionality. They showed how straightforward it is to build a site. What they did not do is migrate a site built with WebMatrix. They also stressed how close it is to Ruby on Rails. I believe Microsoft only built MVC to slow or stop the trend of developers moving from ASP.NET to Ruby on Rails. One point was new to me and I think is important to share: the presenter said that PHP was created around the time Microsoft moved from Classic ASP to ASP.NET, and that it was created because ASP.NET initially made it more difficult to develop Web applications. PHP is designed to be simpler, like Classic ASP. Microsoft is recognizing the complexity of ASP.NET and is trying to make it easier to develop Web apps, and to get more Web developers to use its Web technologies. I found this a little enlightening.

I know I have been slanted toward the negative in this blog post, and I apologize for that, since I try hard to look for the positive in everything and really believe in the “positive approach attracts positive results” philosophy. That said, I learned a few things during the six-plus hours:

  • Modernizr looks like a cool tool for Web developers supporting HTML 5 on current and older browsers.
  • HTML 5 is not just about HTML markup.
  • WebMatrix has potential for developers learning the Web now.
  • Microsoft workshops will continue to disappoint me, but if I learn who is presenting in advance I can be more selective.
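The Modernizr point deserves a concrete illustration. The idea behind it, as I understand it, is feature detection: instead of sniffing the browser’s user-agent string, you probe an object for the capability itself. The sketch below is my own simplification, with stub objects standing in for what document.createElement('canvas') would return in an HTML5-capable browser versus an older one; the real Modernizr library does far more.

```javascript
// Feature detection: ask the object whether it supports the API,
// rather than guessing from the browser's name or version.
function supportsCanvas(el) {
  return typeof el.getContext === 'function';
}

// Stubs standing in for document.createElement('canvas') results.
const html5CanvasStub = { getContext: function () { return {}; } };
const legacyCanvasStub = {}; // older browsers expose no getContext

console.log(supportsCanvas(html5CanvasStub));  // true
console.log(supportsCanvas(legacyCanvasStub)); // false
```

The appeal is that the same check keeps working as new browser versions ship, which is why it beats maintaining a table of browser names and versions by hand.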

At least I walked away feeling it was a technical session and not a marketing session.

I also left with one big question: Why are WebMatrix and LightSwitch two separate products? Since they have a lot of similarities, why not one product with a build switch that selects deployment for the Web or the desktop? I ask this question without a lot of experience in either tool; this is a casual observation from someone who has seen demonstrations and overviews of both products. I know WebMatrix creates ASP.NET solutions and LightSwitch creates a rich Silverlight experience. I know Microsoft likes to have different groups go off and develop products internally that compete in the open marketplace, but I see more synergy than difference. Maybe they target different levels of end-user or developer? I think it would be cool if they shared the same metadata on the back end and let me deploy either generated solution depending on my needs. I don’t know. More to ponder before my next geek get-together, I guess.

Even though the WebCamp was “free”, it cost me, as the owner of my business, 12 hours or more of billable time to send two people. My coworker also spent five hours downloading, installing, and reviewing the lab materials – which were never used – and a couple hundred dollars in travel expenses came out of my pocket. Not to mention the intangibles of the night away from her family and the extra help she had to arrange for her son while she was gone. The bottom-line impact of this “free” workshop is financially significant for our small company and our employees. I feel it is important to budget for staff training, and the type of people we have thrive in a learning environment. What I really dislike is wasting this budget, which is exactly what we did this past week. Fortunately, next time I’ll be smarter.

I do want to retract one tweet, or at least alter it slightly. I originally stated:

I may be watching a train wreck in the making. Nope, definitely a train wreck. Possibly the worst presenter ever.

“Possibly the worst presenter ever” is flat out wrong. Back at a Microsoft DevCon I watched a presenter spend an extraordinary amount of time navigating Open File dialogs looking for files, and navigating menu pads looking for the correct menu item to demonstrate the topic at hand. Many of my blog readers remember the session well. The presenter never rehearsed, and might have made it up the night before for all I know. It was years ago, and they remain in my mind as the worst presenter ever with the worst conference session ever. And there are other sessions I have blogged about over the years where I felt I wasted my time. Overall, Clark’s presentation was not worse, but it could easily rank in the top 10 worst sessions I have had the time to sit through. {sigh}

And to balance it, Brandon was polished and his presentations went more smoothly. Now if I could only forget the HTML 5 vs. Silverlight and Web Forms vs. whatever-is-better-or-not discussions. {g}

So I hope this clarifies the tweet ramblings of frustration my followers were reading. If you are not on Twitter and ran across this blog post, I hope it provides some insight into a developer’s experience with a workshop gone bad. If you are a Microsoft employee looking for feedback on your WebCamps, I believe I was as frank and honest on my paper evaluation as I was here, although this blog post gives you a lot more detail than the simple paper evaluation allowed me to provide. I am sure there are others who saw this presentation very differently than I did. I know the people sitting around me were quite frustrated, but as I waited to talk with one of the speakers at a break near the end of the day, I saw a lot of people hand in evaluations with high marks on the presenter scale. Different perspectives are important to the organizers of an event. I know that because I run a conference and speak at several more each year.

Thanks for taking the time to read this blog post.


Speakers and sessions for Southwest Fox 2011 have been announced. The conference features four half-day pre-conference sessions and more than 26 main conference sessions in five tracks. Whether you’re still working only with Visual FoxPro or extending Visual FoxPro with other tools, you’ll have no trouble finding plenty of sessions to enhance your skills and widen your horizons.

As for our presenters, initially we have lots of Southwest Fox veterans like Menachem Bazian, Rick Borup, Steve Ellenoff, Tamar Granor, Uwe Habermann, Doug Hennig, Venelina Jordanova, Jody Meyer, Jim Nelson, myself, Eric Selje, and Christof Wollenhaupt. We also have three Southwest Fox freshmen: Steve Bodnar, Kevin Ragsdale, and Tuvia Vinitsky. We are hopeful registrations will allow us to bring in additional speakers as well.

I am looking forward to sitting in on lots of sessions if time allows like last year. I am presenting a couple of new sessions:

1) How Craig Boyd Makes Me a Hero!

2) Programming Standards and Guidelines for Software Craftsmanship

White Light Computing is a Platinum Sponsor again this year. We will have a booth to show off our developer tools and services again so please stop by.

You can follow us on Twitter: @SWFox. If you check who @SWFox is following, you will find our list of speakers who are on Twitter.

And there are still plenty of surprises up our sleeves (some we don’t even know ourselves yet) to entice you to come to the best Visual FoxPro conference in North America!

Please help us get the word out about the conference by yelling from the mountaintops. We certainly appreciate everyone who blogs, records podcasts, tweets, or Facebooks (is that the proper verb?) about their positive experiences at past Southwest Fox conferences. An email will be sent on June 1st to everyone who has attended Southwest Fox in the past. Send us an email if you are interested in getting on the list. (info [AT]

Registration opens June 1.

Only 152 days until we gather in Gilbert!


Putting on a conference like Southwest Fox takes an enormous effort. Each year I put in over 200 hours doing organizer tasks, and each year the organizers automate a little more of the effort to help reduce those hours. For instance, the registration process the first year took close to 25 minutes per registration; this year I am averaging close to 5 minutes for someone returning to the conference and 7 minutes for someone new. Most of this savings comes from the electronic registration app I developed and delivered in 2009.

This year I am hoping to reduce the effort of recording the evaluations you give us. It is one of the most important tasks we take care of after the conference. Naturally we are interested in what you have to say about the conference and the sessions the speakers prepare and deliver.

During the conference post-mortem meeting, the organizers divide the evals into thirds and use a couple of very efficient Visual FoxPro forms developed by Tamar to enter everything you put on the paper forms. We do this mostly because we want to get this information to the speakers. We deliver the details and summaries to them in early November (at least that is the goal). It normally takes me a couple of evenings to enter my portion of the evals.

The biggest drawback, other than the time it takes to enter the evals, is the latency in getting the feedback to the speakers. Understanding what you did right and wrong in your sessions would be far more useful if you got the feedback before giving the session a second time at the same conference. The paper approach we use does not allow for this type of feedback.

So in an effort to get feedback to the speakers quicker, to save the organizers a little time after the conference, and as a terrific learning experience for the development team at White Light Computing, I designed an online Evaluation site for Southwest Fox.

To make things really interesting we decided to use a lot of new technology so everyone on the team would learn something new. In fact, some of the technology is beta itself. Oh, and I did not cut the development team any slack at all by giving them the specs and mockups just a few short weeks ago. Heh, if we cannot make it interesting, why do it at all? :)

The core part of the site is already developed. I opened up a private beta testing cycle late last night and already this morning we are getting feedback. If you are interested in beta testing it, we might have a few invites to share with you in the next week or so. So please email me at info AT

If you are interested in how I designed the site please come to my Mocking the Customer session at Southwest Fox 2010 and German DevCon.

Please keep your fingers crossed that White Light Computing can pull this off with the help from the test team, and if you like it don’t be shy about letting us know how we did at Southwest Fox. If you don’t like it, let us know in a constructive way too. We really appreciate your feedback.

Only 18 days until we gather in Gilbert!