2011 TechStars Startup Madness - Congratulations & Recap of the Tournament

Posted by Jon Kelly (Jon)
URL for sharing: http://thisorth.at/23tg
First, I'd like to congratulate Cheek'd, the winner of the inaugural TechStars Startup Madness tournament! The team at Cheek'd fought off 63 startup competitors over the course of 6 grueling rounds to emerge as our first champion. Cheek'd will receive over $25,000 in prizes from Rackspace, TechCocktail, Seattle 2.0, SendGrid, Gluecon, Perky Jerky, MogoTest, AgileZen, StatsMixPro, Foodzie, and SnapEngage to help build their business.

Congratulations are also in order to the rest of the final four contestants:
  • Finalist & Runner-Up: Workables receives 1 year of SnapEngage Business + the semi-finalist prizes
  • Semi-Finalists: Punchd & WhoSent.It each receive an invitation to TechStars for a Day, consideration as a TechStars finalist, 1 full conference pass to Gluecon and 1 year of SnapEngage Basic
The final tallies for the tournament:
  • 212,348 pageviews on voting pages
  • 19,022 votes
  • 1,593 comments
The Nomination and Selection Process
The goal of the Startup Madness tournament was to identify a group of high-quality "under the radar" startups and to give these young companies the attention they deserve. Each participant had to be an Internet or software startup with less than $250K in total funding and less than $250K in revenue in any single year.

The first stage of the contest was a Twitter-based nomination process whereby companies were nominated via Tweets directed at the TechStars Twitter account. We had 319 nominated companies in just over a week's time. We pulled together the Tweet info (number of nominations, reach of those who Tweeted) along with information provided by the nominated companies themselves and passed it over to the TechStars team, who diligently plowed through the entries to identify the 64 tournament participants.
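If you're curious what that roll-up looked like in spirit, here's a minimal sketch in Python. The record shape and function name are hypothetical, not our actual code; it just shows the kind of per-company aggregation we handed over:

```python
from collections import defaultdict

# Hypothetical tweet record: (nominated_company, follower_count).
def summarize_nominations(tweets):
    """Roll up nomination count and total follower reach per company."""
    stats = defaultdict(lambda: {"nominations": 0, "reach": 0})
    for company, followers in tweets:
        stats[company]["nominations"] += 1
        stats[company]["reach"] += followers
    return dict(stats)

# e.g., summarize_nominations([("PriceKnock", 1200), ("PriceKnock", 300)])
# -> {"PriceKnock": {"nominations": 2, "reach": 1500}}
```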

I was so focused on the mechanics of the process that I didn't really have a look at the tourney entrants until I voted in the first round. I have to say that I was incredibly impressed. We had wonderful startups from all over the world who were executing against real consumer and business needs. I strongly encourage you to check out all of the entrants in Round 1 (click on the small box under PriceKnock to see the next contest).

The Tourney Set-Up
Hosting the inaugural Startup Madness tourney was not without its challenges, naturally. The primary issue we had to grapple with before we started was the voting rules. Being the first year of the tourney, we had to make tough choices about the rules without first-hand data upon which to base them. How many attempts would there be to create "fake" votes? How sophisticated would they be? I know there are folks in the tech community who think that "more is always better" when it comes to security. IMO, they're the reason we find so many non-banking websites that require 8-character alphanumeric passwords that change every 3 months. In most cases, it's best to plan for security that matches the expected threat level. The example I like to use is that the security level provided at the U.N. Building in New York would seem silly (and seriously off-putting) at a mall in suburban Denver. Likewise, that mall's level of security would be absurd at the U.N.

We considered a number of different methods, with the choices largely coming down to a simple trade-off: do we create lower barriers to vote, resulting in more participation from the startups' fans, or do we create higher barriers to vote, with a resulting decrease in participation? Unfortunately, we lacked any first-hand data about what would happen once we launched the tourney. In the end, we decided that users could vote with Facebook Connect accounts or via accounts they created directly with This or That. At the time, we did not require that email addresses be verified, nor did we require a CAPTCHA.

Unfortunately, there was a lot more effort directed at the mass-creation of user accounts than we anticipated. We spent the better part of a day after Round 2 analyzing the voting data, and decided to require email validation for This or That accounts from that point forward. As expected, this slowed but did not stop people from creating accounts en masse to vote in the contest (we also saw people set up multiple Facebook accounts to vote). And, as we discussed internally before the contest began, it only made creation of accounts more difficult; it didn't actually ensure that users were unique.
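To give a flavor of what mass account creation looks like in the data, here's a minimal sketch (hypothetical field names and thresholds, not our production checks) that flags IPs registering an unusual number of accounts inside a short window:

```python
from collections import defaultdict
from datetime import timedelta

def flag_signup_bursts(accounts, max_accounts=5, window=timedelta(hours=1)):
    """Flag IPs that created more than max_accounts accounts within window.
    Each account is a hypothetical (username, signup_ip, signup_time) tuple."""
    by_ip = defaultdict(list)
    for username, ip, created_at in accounts:
        by_ip[ip].append((created_at, username))

    suspicious = {}
    for ip, signups in by_ip.items():
        signups.sort()  # order by signup time
        start = 0
        for end in range(len(signups)):
            # Shrink the window until it spans at most `window` of time.
            while signups[end][0] - signups[start][0] > window:
                start += 1
            if end - start + 1 > max_accounts:
                suspicious[ip] = [u for _, u in signups]
                break
    return suspicious
```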

Vote Monitoring & Analysis
My personal background is in performance marketing - I was the President and co-founder of the insurance CPC network SureHits. When the stakes are very high (for those unfamiliar, insurance clicks are among the most valuable on the web), you run across an incredible number of attempts at fraud. To ferret out fraud, we leaned heavily on a version of the "Anna Karenina principle" popularized by Jared Diamond. Specifically, "good" insurance clicks from real consumers exhibit a lot of common characteristics. The characteristics can differ by traffic source (e.g., newsletter vs. paid search vs. banners), but there are a lot of similarities within each type. While those attempting fraud could pretty easily guess what data was available to us, they had an extremely hard time seeing what the good clicks actually looked like across a lot of variables. This information asymmetry created an opportunity to catch most attacks very quickly.

We had a very similar situation here. While it should be obvious to most knowledgeable technologists what data we could collect on the votes and the voters, it would be hard to know what the rest of our data set looked like. As it turned out, like insurance clicks, most of the votes shared a lot of common characteristics. Of course, there were different categories of good votes - there were clear patterns when one of the contestants sent out an email encouraging customers to vote, when one had a party where voters used a limited set of computers over a small timeframe, etc. But generally speaking, clear patterns emerged.
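As a minimal sketch of that idea (feature names are hypothetical, and in practice you'd want to build the baseline from traffic you already trust rather than the raw pool), you can tabulate per-feature frequencies across all votes and count how many of a given vote's characteristics are rare:

```python
from collections import Counter

FEATURES = ["login_type", "email_domain", "user_agent"]  # hypothetical vote fields

def build_profile(votes):
    """Frequency of each value, per feature, across the vote pool."""
    profile = {f: Counter(v[f] for v in votes) for f in FEATURES}
    totals = {f: sum(profile[f].values()) for f in FEATURES}
    return profile, totals

def rarity_score(vote, profile, totals, rare_below=0.02):
    """Count how many of a vote's features hold values rare in the pool.
    Per the Anna Karenina idea: good votes look alike, so a vote that is
    unusual on several independent dimensions deserves a closer look."""
    return sum(
        1 for f in FEATURES
        if profile[f][vote[f]] / totals[f] < rare_below
    )
```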

We felt it would have been irresponsible to say this during the contest, but we ended up taking a very conservative approach to eliminating votes. Most of the concern we heard from the participants was that legitimate marketing efforts or foreign IPs might be targeted. In reality, we focused on cases that were clearer cut. For example, in the Final Four, almost all of the nullified votes came from one block of 81 voters. Here are the characteristics of these votes: 1) all came from the same IP, 2) the votes averaged 1 minute apart with low variance, 3) all used the exact same user agent, 4) all were ToT accounts (zero Facebook logins), 5) all used freemail accounts, 6) almost every username (78/81) was a subset of the email username, the remaining 3 being only slight variants. The unusual nature of that data set is convincing on its own. But to say these votes were "outliers" would be a major understatement. Using just one of the 6 metrics as an example: the total absence of Facebook logins was far, far outside the norm (Facebook logins accounted for 45% of votes overall).
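For illustration only, here's a rough sketch of how those heuristics could be checked mechanically against a block of votes sharing an IP (field names are hypothetical, the same-IP condition is the grouping itself, and our actual analysis happened in spreadsheets):

```python
from statistics import mean, pstdev

FREEMAIL = {"gmail.com", "yahoo.com", "hotmail.com", "aol.com"}  # illustrative list

def flag_vote_block(votes):
    """Check a same-IP block of votes against heuristics 2-6 from above.
    Each vote is a hypothetical dict with keys: timestamp (datetime),
    user_agent, login_type, email, username."""
    times = sorted(v["timestamp"] for v in votes)
    gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]

    checks = {
        # ~1 minute apart, low variance
        "regular_timing": bool(gaps) and abs(mean(gaps) - 60) < 15 and pstdev(gaps) < 15,
        "single_user_agent": len({v["user_agent"] for v in votes}) == 1,
        "no_facebook_logins": all(v["login_type"] != "facebook" for v in votes),
        "all_freemail": all(v["email"].split("@")[1] in FREEMAIL for v in votes),
        # username is a substring of the email's local part
        "username_from_email": all(v["username"] in v["email"].split("@")[0] for v in votes),
    }
    return checks, sum(checks.values())
```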

There is one extremely important point I need to make here. As we saw in earlier rounds, this group of accounts voted on both of the semi-final contests in exactly the same way. Those who are familiar with the SEO concept of Google Bowling (pointing spammy links at competitors and then complaining to Google to have the site banned) know that it isn't right to assume that someone has broken the rules just because you run across activity that would benefit them. It's impossible to determine the origin of these votes and silly to assign blame to any one of the participants.

Results
Let me say openly that the voting process created some real problems for everyone involved. For the startups, there was a tremendous amount of effort required to "get out the vote." While I think (hope) that this turned into a good use of time to build awareness and engagement, I hate to think we added a big distraction by asking the startups to promote themselves through each round. And then, when each round ended, they had to suffer the agony of waiting through the vote verification process, worrying about whether their efforts to get out the vote had somehow tripped a filter in our process.

For the voters, I think this created a situation similar to professional bicycle racing or sprinting: you don't really know who won the race until long after it's over. That's a pretty big letdown. I hesitate to add this, but it was also pretty far from ideal for us. We'd much rather be building new features on ToT or building awareness for the contest than poring over voting spreadsheets.

Frankly, we were caught a bit off guard by the interest in the Startup Madness tournament, but we learned a tremendous amount from it. We'd like to thank everyone who Tweeted, blogged or emailed us about the tourney, both with problems and with praise. We especially appreciated the detailed bug reports and great suggestions we received to improve the site and the tournament. I'd like to specifically thank Matt Curry, who posted detailed suggestions on PseudoCoder, and Zack Kim, who provided us with a video of voting problems he was having on a Mac. This feedback really helps us make the product better. We are already seeing much better results from our re-designed registration and sign-in process, for example. And we have a bunch of ideas to improve the contest next year.

Conclusion
In spite of the voting challenges, we loved hosting this tournament. We'd like to thank David Cohen, Nicole Glaros and the entire TechStars team for their efforts in making it happen and for letting us be a part of it. I personally loved learning about so many amazing small businesses and interacting with their founders throughout the contest. And, in the end, I believe we accomplished our primary goals. Sixty-four of the most interesting and inspired "under the radar" companies have had thousands of real users (including many investors) check out their products, vote, comment and otherwise interact with their businesses. That's awesome!

Debate It! (1)

BTW, how would you identify genuine tweets? I mean, when a company or a brand tweets for itself from different IDs and IPs, how would you find that out, and who would be selected?

Posted by timesheet
