
January 07 2014

Gartner: 2.5B PCs, Tablets And Mobiles Will Be Shipped In 2014, 1.1B Of Them On Android
As the worlds of communication and technology increasingly become mobile-first, the traditional desktop PC continues to go the way of the dodo bird. The analysts at Gartner are today publishing their annual devices forecast, and they project 2014 to be another banner year for mobile — and for its biggest player, Android. There will be 2.5 billion PCs, tablets and mobile handsets shipped this year. Of those, 1.9 billion will be mobile handsets, and 1.1 billion devices will be Android-based. That represents an overall rise of 7.6% over 2013, with all of the growth coming from mobile devices like handsets, tablets and “ultramobiles” (Gartner’s preferred term for not-quite-tablet devices like Samsung’s Galaxy Note). Desktop and notebook PCs will decline to 278 million units, Gartner says. Generally speaking, this is a big improvement over 2013, a year that saw only 3.8% growth.

To give some context for Android’s rise — and for the role “mobile” plays in the device landscape — Google’s OS will account for nearly 45% of all device shipments in 2014. In 2013, it was 38%; in 2015, Gartner projects it will be nearly 48%. The tipping point is nigh, and if you’ve ever wanted proof that Google is today’s Microsoft, this is one starting point.

The same kind of domination is playing out in the installed base. Among existing device owners, Android will clock 1.9 billion devices in use in 2014, while Apple will have 682 million across both iOS and Mac, according to principal analyst Annette Zimmerman. The installed base is important for a number of reasons: it indicates which device makers and platforms have the most dominant ecosystems. And this, in turn, has a knock-on effect not only on where developers will put their resources, but also on where consumers making repeat purchases are likely to put their money. Returning customers and brand/platform loyalty become ever more essential in saturated markets.
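The percentages in the forecast can be sanity-checked from the shipment figures themselves. A quick back-of-the-envelope sketch, using the article's rounded numbers rather than Gartner's raw data, so the results are approximate:

```python
# Back-of-the-envelope check of the shares quoted in Gartner's 2014 forecast.
# All inputs are the article's rounded figures, so results are approximate.
total_2014 = 2.5e9      # PCs, tablets, and mobile handsets shipped in 2014
android_2014 = 1.1e9    # devices shipped running Android
handsets_2014 = 1.9e9   # mobile handsets alone

android_share = android_2014 / total_2014
print(f"Android share of all 2014 shipments: {android_share:.0%}")  # ~44%, i.e. "nearly 45%"

# Back out the implied 2013 total from the 7.6% year-over-year growth figure.
total_2013 = total_2014 / 1.076
print(f"Implied 2013 shipments: {total_2013 / 1e9:.2f}B devices")
```

The 44% result lines up with the "nearly 45%" share quoted above; the small gap comes from the rounding in the reported shipment totals.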
Gartner does not break out specific manufacturers, but Ranjit Atwal, a research director at Gartner, confirmed to me what you may have already guessed: just as Android dominates mobile, Samsung dominates Android. That could, he believes, lead the company closer to developing its own, customized operating system, similar to what Amazon has done with its Fire OS. (Indeed, Samsung’s recent developer conference, with new SDKs for its different range of devices, seems to point in this direction.)

November 14 2013

Gartner: 456M Phones Sold In Q3, 55% Of Them Smartphones; Android At 82% Share, Samsung A Flat Leader
Last year, Strategy Analytics made the prediction that we may have approached "peak Android" for how big a market share the operating system may be able to attain in the U.S. market. Fast forward to today, and it's clear that Android is continuing to grow worldwide -- although its biggest OEM, Samsung, may be the one that has reached a ceiling of sorts. Figures out today from Gartner indicate that in Q3, Android accounted for nearly 82% of all smartphone sales in the period, and while Samsung has continued to remain in the lead, it is doing so with a flat market share of 32%.

July 15 2013

Forrester: $2.1 Trillion Will Go Into IT Spend In 2013; Apps And The U.S. Lead The Charge
Forrester has now released its annual look at the state of IT spend globally, and the analysts project that there will be $2.06 trillion invested across software, hardware, and IT services by enterprises and governments in 2013. Within that, the U.S. will be the biggest-spending country by a long shot, and -- in a sign of the times -- apps will be the single-biggest spending category of all.

July 31 2011


Healthcare Disruption: Providers Are Making Newspaper Industry Mistakes (Part III)

Editor’s note: This guest post was written by Dave Chase, the CEO of a health technology company that was a TechCrunch Disrupt finalist. Previously he was a management consultant for Accenture’s healthcare practice consulting to 25 hospitals and was the founder of Microsoft’s Health business. You can follow him on Twitter @chasedave.

Since the latter half of the ’90s, the handwriting has been on the wall for newspaper companies: media’s future was digital. Heck, the newspapers’ own business sections reported on the trend. Despite this, the majority of the industry focused on traditional strategies such as taking on debt to acquire other newspapers or investing in new printing presses, leading to disastrous consequences.

To be fair, there were some digital investments made, including hiring top-drawer talent. However, over time, the digital teams were marginalized and ultimately the talent that had the capability to transform these organizations left for opportunities where their hands weren’t tied. In other words, the commitment wasn’t deep enough to effect a true transformation.

Now consider healthcare in the U.S.: There’s a clear understanding that the industry must shift its focus from a “do more, bill more” orientation toward outcomes. If ever there was an industry that should understand that it’s more effective to address underlying conditions than to treat symptoms, it should be healthcare. Or, as a famous early newspaper publisher put it, “an ounce of prevention is worth a pound of cure.” Prevention-focused countries such as Denmark have dramatically lowered the need for hospitals: once at 155 hospitals, the country is at less than a third of that today. I find this readily available fact is news to the healthcare providers I speak with.

Whether they don’t know these facts or are ignoring them, incredibly large capital investment projects are on the docket for many health systems. Since 62% of hospitals are mission-based, non-profit organizations, it’s astonishing that they are more focused on capital projects than on addressing the overall health of their communities. No one has made the case, for instance, that the chronic conditions that consume 75% of the $2.6 trillion U.S. healthcare tab are best addressed by building more buildings. Some make the case that there’s a growing healthcare real estate bubble even as the costs of chronic conditions continue to expand.

In healthcare, it’s as though we are building better firehouses and investing in more firefighting equipment while we do the equivalent of leaving oily rags around, letting kids play with fireworks on dry hillsides, and building structures with one exit. We may have the best “firefighting” tools and talent in the world but we’d be much better off if we prevented those “fires” from starting in the first place.

Dr. Ted Epperly recently finished his term as the head of the American Academy of Family Physicians and runs the Family Medicine Residency of Idaho program, which includes 80 physicians serving over 20,000 patients. On a tour of his facility, he stopped to comment on the scene in the waiting room of their biggest clinic, something that’s typical of the many doctor’s office waiting rooms we’ve all experienced. He described the scene as a failure compared to the vision of what he’s planning on implementing.

In Epperly’s vision, he describes a dashboard that pulls from the registry of all of their patients. Rather than reactively waiting for someone to present himself or herself in the clinic, he envisions a system that proactively monitors the array of conditions his patient population experiences. For example, it will ensure diabetics are having regular foot and eye exams and that blood glucose levels are being consistently monitored. If someone hasn’t scheduled an appointment already, it will proactively reach out to him or her rather than waiting for some health crisis.

Epperly has been a leading proponent of a concept in healthcare called the Patient-Centered Medical Home (PCMH), akin to the philosophy that Denmark has adopted so successfully. In many respects, the PCMH is simply an updated version of the Marcus Welby model of medicine with more of a team-based model coupled with technology.

While some may have noticed that several PCMH pilots were included in the federal health reform law, there’s a little-noticed facet of the law that the CTO of the United States, Aneesh Chopra, points out in this video segment. That is, if the payment models that reward positive health outcomes over activity prove, in the eyes of the Actuary for the Medicare and Medicaid programs, to produce cost savings, there is carte blanche authority to expand those models broadly to the entire Medicare population. This could rapidly expand the deployment of the PCMH concept and accelerate the need for the associated HealthTech. The video below explains this in more detail and explicitly speaks to the opportunity for startups.

Another healthcare provider plans to send home patients with an array of personal biometric devices. The output of these devices will be a more complete view of an individual’s health. There’s an explosion of personal biometric devices ranging from personal blood pressure monitors to some being built into clothing and widely deployed in places such as Denmark.

For the cost of a small wing of one of these new Taj Mahal structures, healthcare providers could have a team of innovators working on scenarios such as those described above and many others. Those that avoid sticking to the old tried-and-true methods of differentiation that worked in the past will be light years ahead as the transformation of healthcare takes hold. If they don’t, employers, who pay the bulk of healthcare costs, are already taking matters into their own hands and building their own onsite clinics.

Whether the innovation comes from within or from non-obvious competitors such as employers or pharma companies, there’s a distinct advantage in having a blank slate on which cost-effective systems and models of delivering care can be developed. Providers would be well advised to build their own innovation teams, unfettered by the current model, so they can develop the models that will ensure their long-term survival.

If you missed the first parts of the series, you can find them at the links below:

July 30 2011


Three Companies Chi-Hua Chien Of Kleiner Perkins Would Love To Invest In

Today at Aol West Headquarters, a number of entrepreneurs, VCs, and executives gathered to discuss the state of the mobile industry and mobile technology. After a series of individual panels, the day concluded with the crowd of panelists gathering together for a lively discussion about the future of mobile, current mobile trends gaining legs, as well as what’s missing. Chi-Hua Chien of Kleiner Perkins stepped in to give an example of what’s missing in the industry by sharing three particular business models that he’d like to see make their way into the space.

In a prior panel, Chien, Skype investor Howard Hartenbaum, and Tango founder Eric Setton spoke about how closing the “redemption loop” is becoming one of the most important goals in the daily deals space, specifically on mobile. (Something TC’s Erick Schonfeld talked about in a post earlier this week.) Chien pointed out that one of the big goals is to forge a future where a customer can walk into a store and the merchant will immediately know who they are and what they want — and that someday soon Twitter and Foursquare will act as something akin to a CRM platform for businesses to help make that happen.

But, as to the three companies that Chien wants to see, and invest in: for starters, he envisions a killer mobile company offering a completely automated personal assistant — something that, he said, “you couldn’t do before mobile”. He cited the example of having dinner reservations with a friend who lives, say, 30 minutes away. The user’s mobile device, thanks to location awareness, knows exactly where they are and how far they are from the restaurant. What’s more, because they made their reservation on OpenTable, the automated assistant knows exactly what time they planned to meet.

And because you’re 30 minutes away from where you’re having dinner, the assistant can tap into a traffic app and know that there’s congestion on the way. It might then send out an alert to the person you’re having dinner with, or, in an automated way, message both people to confirm that they’d like to push the reservation back by 30 minutes, make that change, and close the loop with no effort.

Part of what’s making that possible now, he says, is the very existence of mobile, but it’s also thanks to the maturity of the platforms and of the APIs through which they’re now accessed. The automated personal assistant addresses a set of needs that couldn’t be solved in an asynchronous environment on a desktop.

Secondly, education is a trillion-dollar market “that’s completely screwed up”, because it involves millions of children going to sit in a classroom for 7 hours, and because it combines three different businesses for the state: the real estate business, the union labor management business, and the certification business.

In reality, education should be delivered on a realtime basis to students learning at their own pace, who don’t have to sit in a room with 30 people in an antiquated environment — a realtime, mobile solution that’s learning-based as opposed to curriculum-based. This second idea is a bit more nebulous, but Chien is hitting on an important theme here: how badly American education is in need of disruption and innovation, especially as that relates to mobile.

The third model Chien alluded to was health and fitness. “We all wish that we could lose ten pounds”, he said, and now there’s a device in your pocket that can seamlessly manage its owner (the personal assistant theme again), encouraging the user to exercise, eat healthier, or whatever the case may be. It can truly manage the degree to which you pay attention to your health and your exercise regimen, helping you lead a healthier lifestyle.

There’s a huge need here, Chien said, something that never could have been tackled in a PC environment, simply because the overhead of checking a website every day (as opposed to using a mobile device that’s portable and always with you) is just unsustainable. The device knows what you’re eating, what the caloric intake of that food might be, can advise you against consuming that third ice cream cone, and can tell you your heart rate after a 5-mile run. When one combines that with information displays and notifications optimized for a mobile setting — well, it’s enough to make an entrepreneur’s mouth water.

Afterwards, Schonfeld asked Chien if these were actually three stealth startups that Kleiner Perkins had recently invested in, to which Chien laughed and said no, but if there are companies out there making these products, Kleiner may very well be interested.

“And those aren’t just dinky features … those are companies”, Chien said. “Those are companies attacking trillion dollar markets.”

I also kept hearing a theme of automation in what Chien talked about. Clearly, at least in his mind (though I think it’s in the minds of many others as well), automated processes, whether they be customer service, healthy living, or retail processes, are going to be big not just because we’re lazy, but because they help us focus on doing the things we love.

July 28 2011


British Court Orders ISP To Block Filesharing Website In Potential Landmark Ruling

The issues of censorship, net neutrality, and file sharing will be kicking around for years to come, and the necessity of making the relevant laws agree internationally will by no means be a small part of the conflict. But those laws have to be reasonable and scalable to begin with. Today brings a development from the UK, where a judge has determined that BT must use its Cleanfeed censorship technology, intended for blocking child pornography, to prevent its subscribers from accessing the file sharing website Newzbin2.

It seems that even the Pirate Bay defense (moving your servers to a secret cave) will be ineffective in this case. As I wrote before regarding the need for an alternative DNS: when lobbyists and short-sighted legislators start cutting off certain sources at whatever choke point seems convenient, that’s nothing short of a slippery slope.

The very nature of the suit is suspect to begin with. In the introduction to the ruling, it is stated:

In these circumstances, the Studios contend that the only way in which they can obtain effective relief to prevent, or at least reduce the scale of, these infringements of their copyrights is by means of an order against BT (and thereafter the other ISPs) of the kind now sought.

While this is likely boilerplate in part, the idea that this is the only way they can “obtain effective relief” is only acceptable to the laziest of investigators. It’s a sign of the times that such a large and influential organization can not only contend that with a straight face, but have it pass without comment in a judge’s written opinion.

The abuse of a tool made for a very specific and justified purpose shows just how unscrupulous the MPA is in its actions. Cleanfeed, administered by the Internet Watch Foundation, works to “minimise the availability of… child sexual abuse images hosted anywhere in the world” — it’s a scalpel, not a mallet. But the MPA specifically requests it be used as a means to prevent access to content of its choosing. If it were serious about doing things right, it would be reaching across the aisle, or whatever it is you reach across in the UK (the gap?), to come up with long-lasting and correct adjustments to law and enforcement capabilities. Of course, the legislation we’ve seen on our side of the pond isn’t exactly promising — but at least they haven’t started lobbying for pirates to be entrapped on “To Catch A Predator.”

Anxious to please the court, the MPA also magnanimously acknowledged the existence of DNS-, IP-, and DPI-based methods of detecting and intercepting these rogue packets.

Even if the websites they ask to be blocked could be proven to be in violation of law (a fine point considering the nature of NZBs, which, like torrents, do not and cannot in themselves contain copyrighted material), why is it left to monolithic private entities like the MPA to make that determination?

The goal of this lawsuit is ostensibly to reduce losses from piracy. The MPA (Europe, America, or otherwise) may be stupid, but it’s not that stupid. They know that as soon as they cut off one head, two more will appear to take its place. They know that even if they were to shut down the top 20 providers of quasi-illegal content like torrents, it wouldn’t affect the numbers one iota. This operation, like most of their legal operations, is meant to determine the extent to which they can turn private grievances into public ones. They’re just flexing their muscles.

I don’t want to seem unduly harsh on the judge, one Honorable Mr Justice Arnold; his opinion is quite thoroughly researched and a great number of precedents and existing European law are cited. Unfortunately his judgment simply leans in the direction of the plaintiffs. For instance, on the important but subtle issue of whether it is Newzbin2 or the BT subscriber who is doing the infringement, he simply rejects the idea that the subscribers are the first and final infringers. It’s not an ignorant conclusion (like a few we’ve seen stateside), just an unfortunate one that increases overall liability and muddies the issue:

Once it is concluded, as I have, that the users are using BT’s service to infringe copyright, then it follows that the operators [of Newzbin2] are too… The operators make the works available in such a way that users can access them over BT’s network (among others). In my judgment that is sufficient to constitute use of BT’s service to infringe.

In other words, according to this judgment, every ISP is liable for the actions of every website or service accessed by its subscribers. Apparently it is sufficient to show that a user is using a service in a way that is illegal. Certainly the vast majority of the content on Newzbin2 is copyrighted material. But it’d be incredibly easy for the site to upload 10 public domain items for every copyrighted one, immediately invalidating the statistical analysis cited by the MPA. “If users prefer copyrighted files, what business is that of ours?” Newzbin2 might reasonably ask. “We provide a service that tracks these files, that’s all, and charge for the oversight of our editors, who are not concerned with the nature of the content they organize.”

BT actually objects on grounds like these, first with the objection that if the judge grants this injunction, the plaintiffs will immediately seek duplicate injunctions against other ISPs and other sites. To this the judge says that while that may be the case, it’s not material to this case. Hard to argue with that, strictly speaking, as this guy is clearly guided by the letter of the law here, but a lack of concern for precedent is partially what got us into this mess in the first place.

BT also contends that the block will be ineffective, saying the users will easily circumvent the Cleanfeed block or whatever is put in place. The judge acknowledges that the tools and expertise to circumvent the system are readily available, but says: “Even assuming that they all have the ability to acquire such expertise, it does not follow that they will all wish to expend the time and effort required.”

Really, now. That’s a bit optimistic, in my opinion. The objection is substantial: the injunction sought would be ineffective. Dismissing it by saying these technically proficient people won’t bother with the trivial changes necessary to get around the restrictions is just pigheaded, and he drinks the plaintiffs’ expert Kool-Aid in accepting that the shutdown of The Pirate Bay was an effective measure. The evidence supporting this position is of the flimsiest quality, while the burden of proving that all the infringers have the ability, desire, and time to circumvent the measures weighs heavily on BT.

BT and the MPA will reconvene in court in October to work out the nuts and bolts of how the blocking will work. Newzbin2 has responded to the ruling, but doesn’t really bring anything new to the case. BT is not appealing the decision, and who can blame them?

The comments of Peter Bradwell, from digital rights organization Open Rights Group, ring disturbingly utopian in this age of lowered expectations from the powers that be:

If the goal is boosting creators’ ability to make money from their work then we need to abandon these technologically naive measures, focus on genuine market reforms, and satisfy unmet consumer demand.

He continued: “You may say I’m a dreamer, but I’m not the only one. I hope someday you’ll join us – and the world will live as one.”

Here is the full ruling:


A Look At The Size, Shape And Growing Threat Of Malware Networks [Infographic]

Blue Coat Systems, the provider of web security and speed optimization solutions, released a mid-year web security report earlier this month, which, among other things, examined the current state of malware ecosystems, and detailed the growing size and reach of malware delivery networks.

Malware has been around for years, but malware networks are becoming increasingly dynamic and continue to wreak havoc on search engines, email, and everything in between. And no, my computer has not been infected by visiting this site, and no, I will not download your antivirus software, malware bot.

Larger malware networks have begun swallowing smaller malware entities, and they’re now serving up their web landmines at astonishing rates. Apple even seems to have reached a tipping point, with enough market share that malware networks have begun targeting Apple OSes. It’s not quite the “explosion of malware on Macs” many forecasted, but it’s still a much larger problem than it was a year ago. And it’s not just desktops and laptops that are affected; malware has gone mobile, too. Android appears to be becoming more vulnerable, as security firm Kaspersky Lab identified 70 different malware variants on Google’s mobile OS in March.

Hide yo wife, hide yo kids, etc.

Building on top of Blue Coat’s midyear report, Chris Larsen, a senior malware researcher, has put together a nifty little infographic detailing the shape and heft of the malware ecosystem and what areas in particular pose the biggest threats. Larsen told me that, as one might expect, if you’re a malware provider, you want to be where the crowds are, setting your traps in the most highly trafficked areas of the Web.

He also said that the most common form of malware is the invitation to download fake antivirus software, but there’s also the age-old “Take this survey!” malware, and that which comes disguised as a PDF or office document file. And users can be infected by malware or spam without even downloading a file, Larsen says: a form of drive-by downloading makes it possible for attackers to probe your browser for vulnerabilities and dive in when they see an opportunity.

According to Larsen and team’s research, search engines have become breeding grounds for malware. And though Google does a good job of identifying poisonous text links, image search is currently “the most dangerous activity” one can engage in on the Web. Part of the problem is that the design of Google’s image search is such that you may be clicking on an image cached by Google that is coming from one of a malware network’s many phony websites. You’ve already clicked through to the image before you know you’re cooked.

Malware networks don’t traditionally come with names, as one might expect, but Larsen said that the security industry has now been tracking the biggest malware offenders for long enough that they’ve been able to identify trends. Traditionally, he said, malware has been identified by particular attacks (and named accordingly), but the reality, he said, is that some networks have grown so large that they have their hands in many different scams at once.

They might be gaming you on Twitter, offering you fake antivirus software in a Google image search, and trying to sneak into Apple OS X through the backdoor all at the same time. Blue Coat has begun employing a naming system for the top malware networks, using plays on mythical tricksters to give these malicious networks an identifier.

And they need names, because these networks are fast and slippery. The average number of unique host names per day for the top 10 malware delivery networks is 4,107, and an average of over 40,000 users make unwitting requests to malware networks each day. With the highly covered attacks Lulzsec and Anonymous have made in recent months using DDoS attacks and simple SQL injections, the vulnerability not only of the average web user to malware, Trojans, and viruses, but also of high-profile networks and websites, has been pushed to the fore.

It should be noted that we need to be careful of taking an alarmist stance (just when you thought it was safe to go back in the water!); we don’t exactly need one more thing to worry about in our daily web activities, but it is important to be aware of the areas of the Web that malware networks are targeting as entry points. Many of us have had our own Facebook or Twitter accounts hijacked by link-disseminating malware — or at least know someone who has. Shoppybag, anyone?

What’s more, Symantec released its own intelligence report today indicating that this new form of rapidly changing malware is leading to a rise in sophisticated, socially engineered attacks. In terms of spam, the report found that the global ratio of spam in email traffic rose to 77.8 percent, an increase of 4.9 percentage points from last month.

Symantec also found that an average of 6,797 Web sites each day harbor malware and other malicious programs, an increase of 25 percent from last month.

For more, check out the infographic below:

Excerpt image courtesy of MaximumPC.

July 27 2011


Solve Media Is CAPTCHA-ing 620K Type-In Ads A Day

Ads are everywhere: They’re in our content, they’re online and offline, they’re on buses, billboards, and more. I’ve been told just to “get used to it” — that the advertising proliferation is only going to continue — so I’m thinking of selling some space on my forehead. Might as well take advantage. Of course, those who are successful in digital advertising are generally those who can find non-traditional areas to use as advertising space, especially if those prove more effective than typical display (or banner) advertising. Late last year, a young startup called Solve Media began taking ads into a new niche digital territory by reformulating … CAPTCHAs.

Yep. You heard that right. CAPTCHA boxes, for those unfamiliar, are those prompts that require users to input an odd array of letters and numbers so that, for example, the vendor from which they’re buying tickets can be sure that the user is not some kind of evil spam robot. Solve Media’s unique approach is to use its “TYPE-IN” platform to replace the fuzzy words and numbers used in puzzle-based CAPTCHA systems with a simple logo, or a brand message in quotes, along with a simple input box. Users type in what brand they see, or answer the given question, and then proceed on their merry way.
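Mechanically, a type-in ad works like any other challenge-response CAPTCHA, just with a human-friendly answer. Solve Media's actual platform and API are not documented here, so everything in this sketch (the challenge store, the function names, the sample answers) is purely illustrative:

```python
# Hypothetical sketch of a type-in CAPTCHA flow; all names are illustrative,
# not Solve Media's real API.
import secrets

# A served challenge pairs an ad creative (logo or quoted message) with
# the expected answer(s), matched case-insensitively.
CHALLENGES = {
    "ch_001": {"prompt": "Type the brand you see in the ad", "answers": {"pepsi"}},
    "ch_002": {"prompt": 'Complete the slogan: "an ounce of prevention is worth a pound of ____"',
               "answers": {"cure"}},
}

def serve_challenge():
    """Pick a challenge to embed in the page alongside the ad creative."""
    cid = secrets.choice(list(CHALLENGES))
    return cid, CHALLENGES[cid]["prompt"]

def verify(cid, user_input):
    """A human who saw the ad can answer; a bot that can't render it fails."""
    challenge = CHALLENGES.get(cid)
    return bool(challenge) and user_input.strip().lower() in challenge["answers"]

cid, prompt = serve_challenge()
print(prompt)
print(verify(cid, " Pepsi ") or verify(cid, " Cure "))  # a human answer passes
```

The advertiser's win is built into the check itself: the user cannot pass without having consciously read the brand message.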

It’s an interesting approach, and it has actually tested quite well. For the most part, consumers are far less likely to struggle with Solve Media’s CAPTCHAs than with the annoying fuzzy puzzle variety. Meanwhile, publishers get to employ a security buffer (when needed), and advertisers get guaranteed views and impressions for their brands. In fact, according to Solve Media CEO Ari Jacoby, consumers who saw a Solve Media type-in ad engaged 29 percent of the time during June of this year — far higher than engagement in traditional forms of advertising.

Compared to banner ads, in particular, which are nearly universally disliked and rarely clicked on, Solve Media’s type-in model is seeing a much higher level of engagement. And, for the consumer, at least interacting with a Solve Media ad gets them somewhere.

Of course, the fact that a user is basically forced to interact with Solve’s ads would seem a deterrent, but according to Jacoby, so far there have been few complaints. In short, typical CAPTCHAs really are a pain, and Solve’s type-ins save the user time in comparison: it’s much faster to type in “Pepsi” than to figure out what kind of devil-speak a traditional CAPTCHA is asking you to regurgitate.

Furthermore, in regard to its new approach testing well: since launching in September of last year, Solve Media has attracted more than 2,000 publishers and more than 75 advertisers, including names like Toyota, Microsoft, Universal Pictures, AOL, and Tribune.

Solve Media saw nearly 15 million successful type-ins across its platform in June, which, according to Jacoby, would be the equivalent of serving 15 billion banner ads, assuming the industry-average CTR of 0.1 percent. And even against the roughly 3 percent average engagement cited for rich media, that’s still a significant lift.
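Jacoby's equivalence is straightforward arithmetic: at a 0.1 percent click-through rate, each guaranteed type-in engagement is worth a thousand banner impressions. A quick check of both comparisons, using only the figures quoted above:

```python
type_ins = 15_000_000   # successful type-ins in June, per the article
banner_ctr = 0.001      # industry-average banner click-through rate (0.1%)
rich_media_ctr = 0.03   # ~3% average engagement cited for rich media

# Impressions each format would need to generate the same number of engagements.
banner_equivalent = type_ins / banner_ctr
rich_equivalent = type_ins / rich_media_ctr

print(f"Banner-ad equivalent: {banner_equivalent / 1e9:.0f}B impressions")   # 15B
print(f"Rich-media equivalent: {rich_equivalent / 1e6:.0f}M impressions")    # 500M
```

So even granting rich media its much higher engagement rate, matching June's type-in volume would still take half a billion rich-media impressions.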

What’s more, Solve is also averaging 620,000 type-ins per day in July (up from just over 500K in June), and is growing 15 percent month-over-month, according to Jacoby. And the network has grown over 460 percent since launch. It seems they may be onto something.

In fact, IHG, which owns Holiday Inn, ran an advertising campaign in May and June on Solve’s platform (for its so-called “Vacation Pay”) that delivered a 122 percent lift in brand awareness and 15-times the average CTR, according to comScore.

Solve is growing like gangbusters, and the startup really seems to have hit on a valuable niche space that supports advertising and guarantees brand interaction in ways other solutions (and forms of advertising) can’t. For more, check out Solve’s video below, and tell us what you think.

July 26 2011


Next MacBook Pros To Feature Air DNA?

The Sandy Bridge update to the Air line has been enough to make some feel the lightweight laptop is ready for prime time (I’m convinced, personally), but it’s still not enough for some. MacBook Pro users are accustomed to more storage, more screen real estate, and a greater number of ports. If rumors are to be believed, the best of both worlds might be on its way, with Air-style design making its way to the Pro line.

The sources are obscure, referred to only obliquely (MacRumors “has learned” and TUAW is “hearing”), so take this with a grain of salt. But even sans source, it makes sense, while leaving room for plenty of speculation. What will the compromises be?

For one thing, the optical drive will almost certainly be eliminated. Luckily for Apple, that particular item is not a priority among their users. iTunes and the Mac App Store are popular, and Apple Stores themselves are increasingly bereft of boxed software.

Furthermore, Thunderbolt presents an extremely easy way to add a high-speed peripheral. No optical drive? No problem: a $100 external drive operating with no loss of speed compared to the built-in original.

Storage is a bit more complicated. As popular as streaming solutions are, local storage is still very important for editing media, something Apple has been pushing on consumers hard with the iLife suite — though pros may be jumping ship after the poor reception of Final Cut Pro X. 256GB of flash storage is nice, but people want terabytes. Yet 2.5″ laptop drives are still too thick to include in an Air-like body. Or are they? Some laptop drives with a terabyte of space are coming in at under 10mm thick. You couldn’t fit that at the sharp end of the Air, but there might be room for a 2.5″ drive at the top right edge, where the optical drive would go. Apple is happy to customize PCBs to optimize space.

A spinning HDD alone would mean a performance hit, though. So I’m guessing they’ll ship with a hybrid SSD-HDD volume, with the system, applications, and temp files kept on SSD and bulky media kept on HDD. It’s not so hard to segregate data like that, and since Apple likely can’t commit to only one storage type or the other, they’ll have to do something interesting with both.

Let’s not forget the ports. Apple will want to push Thunderbolt, and while shipping with only that port might be a daydream of theirs, my guess is they’ll go with two Thunderbolt and two USB ports. 2.0 or 3.0? I wouldn’t put it past Apple to limit their USB capabilities in order to make Thunderbolt accessories more enticing. And as others have pointed out, Thunderbolt-connected I/O hubs will handle USB 3.0 speeds with ease, though the ports on the new Cinema Display are indeed only 2.0.

A holiday release (as suggested by TUAW) would prevent them from adding a next-gen processor (coming in 1Q12), but to complement the Air styling they’ll probably want to market long battery life as well, and switching from the old Core i7s to Sandy Bridge might have been enough for Apple’s purposes. Want the latest hardware? Get a PC (I did).

So: no optical drive, hybrid SSD/HDD storage, and two Thunderbolt and two USB ports (plus an SD slot and Ethernet, of course). Maybe something more for the 17″, but I don’t think these specs sound at all unlikely.

July 25 2011


Healthcare Disruption: Providers Will Use HealthTech to Differentiate and Produce Better Outcomes (Part II)

Editor’s note: This guest post was written by Dave Chase, the CEO of a health technology company that was a TechCrunch Disrupt finalist. Previously he was a management consultant for Accenture’s healthcare practice and was the founder of Microsoft’s Health business. You can follow him on Twitter @chasedave.

Historically, in the U.S. healthcare system, a primary way to differentiate oneself as a healthcare provider has been to have impressive physical assets such as newly built clinics/hospitals/wings and medical equipment. This is logical when the legacy reimbursement model has incentivized activity (procedures, tests, prescriptions) instead of positive health outcomes. Anything that creates more activity creates more billing opportunities.

However, the DIY Health Reform movement has recognized that the flawed fee-for-service reimbursement model has been responsible for healthcare’s hyperinflation. Some of the most interesting healthcare provider startups, such as MedLion, National Surgery Network and One Medical Group, are using IT rather than expensive equipment and facilities to differentiate themselves and affordably deliver superior health outcomes.

In Part I, Healthcare Disruption: Pharma 3.0 Will Drive Shift from Life Science to HealthTech Investing, I discussed how Health IT has primarily been applied to administrative functions such as claims processing rather than core clinical functions such as decision support. In contrast, today’s innovative healthcare providers recognize the changing dynamic of healthcare requires a fundamental rethink of the customer experience as has happened in many, many other industries.

With one third of the workforce being permanent freelancers, contractors, consultants and entrepreneurs, individuals are compelled to directly buy healthcare rather than rely on their employer as they have in the past. The percentage of people directly buying their own healthcare will approach 50% as more employers opt out of providing health benefits as they get priced out (most have already reduced the percentage of the health premium they cover). Thus, consumerism is beginning to pervade healthcare like never before. In response, the aesthetic of providers’ websites matters much more. For example, the website of Benchmark Capital-backed One Medical Group would make Philippe Starck proud (see screenshot below).

Of course, it needs to go beyond aesthetically pleasing websites. Whereas technology has historically brought only incremental administrative efficiency to healthcare, organizations such as Qliance and One Medical have utilized technology for radical transformation. It’s no coincidence that they are backed by the founders of Amazon, aQuantive, Dell, Expedia, and venture firms such as Benchmark — all organizations that used technology to disrupt entire industries. Qliance evaluated 240 different U.S.-based electronic medical record systems before rejecting them as too billing-centric rather than focused on patients and health outcomes. Instead, Qliance is creating competitive advantage by developing its own systems from a mix of off-the-shelf and custom-built software.

The aforementioned organizations are all disruptive startups bringing dramatically lower costs and better outcomes. What’s going on at the large health system level where there’s greater complexity including legacy processes and systems? Let’s look at one example from the heart of Silicon Valley — Stanford Hospital & Clinics.

    This was our opportunity to come up with ideal ways of working, not simply to replicate our very poor processes when we put in the new systems—because that’s just a really expensive copy machine…

    At some level, the adoption of a comprehensive EMR system is no longer optional for a major medical center like Stanford. However, Tabb is the first to agree that the organization must do more than absorb this significant expense as a cost of doing business. While Stanford has traditionally maintained a “product leadership” position within the medical community (i.e., for providing leading-edge, high-quality care), competitors such as Kaiser Permanente are seeking to disrupt the status quo by offering interesting new care paradigms that leverage the EMR as their foundation.

Stanford Hospital & Clinics is implementing a traditional health IT system from one of the leading health IT vendors — Epic Systems. Epic is appropriately named: by all accounts, the implementations and costs are also epic. All the Epic projects I’ve heard about run to eight and nine figures for the software and implementation. The scale of these projects sounds similar to the early days of CRM, when it could only be implemented by very large organizations. The market leader for CRM was Siebel, and those projects regularly ran seven to nine figures (reportedly 10 figures in the case of Epic’s Kaiser implementation), which is obviously out of reach for small and medium-sized organizations.

Disruptive pricing didn’t come from Oracle or the other large client-server vendors extending this important category into smaller organizations. Instead it came from cloud-based Salesforce, which brought costs down dramatically. Perhaps more interestingly, Salesforce created an open ecosystem, inviting third-party developers to address the wide array of customer requirements for particular job functions and industries. Healthcare has a similar diversity of conditions and communities that will necessitate a third-party ecosystem. I predict that just as the closed nature of legacy client-server CRM systems created an opportunity for Salesforce, the similarly closed legacy client-server health IT systems will create opportunities for modern, open architectures from a new generation of tech startups.

By definition, the legacy systems have been optimized around the flawed fee-for-service model that pervades healthcare today. In contrast, the disruptive new care and payment models that are exploding around the country require a new ecosystem of technology platforms. Out of necessity, the new healthcare delivery models have demanded custom-built software, but this should change as those models reach critical mass. A market for off-the-shelf software for the next generation of HealthTech will develop.

A pharma executive explained to me the need to focus more on health technology. “Based on the ‘patent cliff’ in healthcare and the need for continued R&D, promotional budgets are becoming tighter; technology offers a less expensive way of interacting with our customers. Simultaneously, while many physician offices have been reticent to adopt technology, the incentives being put before them are now changing their perspective on technology… pharma companies have an opportunity to take advantage of this.” She went on to explain that “no-see docs” (i.e., physicians who generally bar pharma reps from meeting with them) may be more open to new technologies delivered through reps that can help achieve better outcomes.

The scale of the plans for new business models emerging from major pharma, health product/device and health plan organizations will have these previously complementary organizations increasingly competing with each other. Perhaps more interestingly, they will begin competing with the very healthcare providers they have offered their products/services to. The notion of coopetition is familiar to those in tech but will likely become a term that is no longer foreign in healthcare. Just as we’ve seen Media become more like Merchants, I’d expect we’ll see healthcare suppliers acting more like providers. We’ve already seen healthcare providers become health plans.

Newspapers provide a cautionary tale for healthcare providers: it was the non-obvious competitors that cratered the newspaper business. In Part III of this Healthcare Disruption series, I will draw parallels between the behavior I observe today in health systems and the behavior of newspaper companies in the second half of the ’90s. Consider that Denmark’s shift to a focus on outcomes over the last couple of decades has resulted in half of its hospitals closing, as they are simply no longer needed.

Healthcare providers must reinvent themselves or they’ll meet a fate similar to that of Denmark’s now-closed hospitals. A key part of their reinvention will be enabled by a new generation of technology solutions.

July 24 2011


Closing The Redemption Loop In Local Commerce

When it comes to local commerce, the ultimate prize everyone is going after right now is how to close the redemption loop. The redemption loop starts when a consumer sees an ad or an offer for a local merchant, and is completed when the consumer makes a purchase and that purchase can be tracked back to the offer. If you know who is actually redeeming offers and how much they are spending, you can be much smarter about tweaking and targeting those offers.

Groupon, LivingSocial, and other daily deal sites have created enormous value by pushing the redemption loop the furthest. When someone buys a daily deal, for instance, that translates into cash for the merchant. But for the vast majority of their deals, Groupon and LivingSocial do not track whether they are ever redeemed, much less how much each consumer actually spends at the store or restaurant once they show up.

In order to complete the circle and track offers all the way through redemptions, it is necessary to either tap into the payment system or create an alternative way to track redemptions. Different companies are tackling this problem in different ways, but they almost all rely on a shift from emailed coupons to offers delivered through mobile apps.
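To make “closing the loop” concrete, here is a minimal sketch of the data problem: joining an offer to the purchases attributed back to it. Every name and field here is hypothetical, purely for illustration; a real system would populate the redemption records from a payment processor or a merchant-side redemption code.

```python
from dataclasses import dataclass, field

@dataclass
class Offer:
    offer_id: str
    merchant: str
    redemptions: list = field(default_factory=list)  # purchases traced back to this offer

def redeem(offer: Offer, consumer_id: str, amount_spent: float) -> None:
    """Record a purchase attributed back to the offer — the 'closed loop'."""
    offer.redemptions.append({"consumer": consumer_id, "spent": amount_spent})

offer = Offer("spa-50-off", "Acme Spa")
redeem(offer, "consumer-123", 80.0)  # tracked via payment system or in-app code
total_spend = sum(r["spent"] for r in offer.redemptions)
```

With per-consumer redemption and spend on file, the merchant can see which offers actually drove revenue and target the next one accordingly — exactly the feedback loop the untracked deals lack.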

Next Jump CEO Charlie Kim, who recently partnered with LivingSocial to power daily deals across his commerce network, sees a shift in targeting from broadcasting deals to narrowcasting them. “Blasting out a deal to everyone in New York is not targeting,” he says. “When you broadcast too much in any category, it is just a lot of noise. Email response rates have plummeted for everyone across the industry. What used to be 10% response rates even a year ago, now you are talking the 1% to 2% level.” The constant barrage of emails from Groupon, LivingSocial, and every daily deal copycat is creating user fatigue that is visible in declining response rates.

And that is why mobile is so appealing. If you can send deal notifications to people’s phones based on their exact location and nearby deals, you have the beginnings of narrowcasting. Later on, companies will figure out how to layer on ways to target by income, gender, and other factors as well.
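The “nearby deals” filter at the heart of narrowcasting is, at its simplest, a distance cutoff. A rough sketch (the deal data and coordinates are made up; real services would also factor in demographics and inventory):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 6371 * 2 * asin(sqrt(a))

def nearby_deals(deals, lat, lon, radius_km=1.0):
    """Narrowcast: keep only deals within walking distance of the user."""
    return [d for d in deals if haversine_km(lat, lon, d["lat"], d["lon"]) <= radius_km]

deals = [
    {"name": "Pizza", "lat": 40.7410, "lon": -73.9897},  # Flatiron
    {"name": "Spa",   "lat": 40.8075, "lon": -73.9626},  # Morningside Heights, ~7 km away
]
close_by = nearby_deals(deals, 40.7416, -73.9900)  # user standing in Flatiron
```

Only the Flatiron deal survives the 1 km filter; the uptown spa is noise to this user, which is the whole point of narrowcasting over broadcast email.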

Mobile and local commerce go hand in hand. In a few cities, Groupon is testing out Groupon Now and LivingSocial is offering Instant Deals. In both cases, the deals appear on mobile apps and can be redeemed instantly, rather than having to wait a day for the deal to go live, as is the case with their regular daily deals. The downside of these deals is that Groupon and LivingSocial cannot take advantage of their existing deal inventory and they have to actually provision participating merchants with iPhones and iPads so that they can accept the deals and Groupon/LivingSocial can track them. Yelp is doing something similar where you have to show a redemption code to the merchant from your phone.

Foursquare and Facebook are taking a different approach through their separate partnerships with American Express. Since AmEx is the payment system, it records deal redemptions along with the actual payments. Merchants and consumers don’t have to do anything different from what they normally do. Pay with a credit card and your deal is redeemed. Except it only works if you have an AmEx card and the discount is credited to your account later.

Google is trying to link Google Offers to its Google Wallet, which requires an NFC chip in your phone and an NFC reader at the merchant’s checkout. It has the advantage of working with MasterCard, Citi, and other large payment processors. But it also depends on a brand new technology that will take a long time to become widely available.

The key to closing the redemption loop is definitely payments. Investor Chris Sacca recently told Kevin Rose in a video interview that the best reason for Twitter to buy Square is that Twitter has the broadest reach to distribute offers and deals, while Square has a built-in way to track redemption. This was just an off-the-cuff remark in a friendly chat (Twitter isn’t even in this business yet), but it makes sense.

We are moving from a world of online ads that produce impressions and clicks to online and mobile offers that produce real sales. If the deal companies can figure out a way to actually measure those sales, it could open up local commerce in a massive way that makes what they’ve done so far look like child’s play.


The Winklevosses Vs. Silicon Valley

The Winklevoss twins had their original case against Facebook dismissed yesterday, prompting tech media to write another slew of “The Winklevosses’ Case Against Facebook Is Over But Wait Actually It Isn’t” headlines. The seven-year battle is indeed not over: the Winklevosses intend to file a motion under Rule 60(b) alleging that Facebook withheld evidence during the first trial that could have been used in the settlement, evidence which they claim would increase the value of their Facebook shares to around $200 million (which is about $200 million more than I or probably any of you have).

This news comes shortly after former Harvard president Larry Summers called the twins “assholes” at the Fortune Brainstorm tech conference in Aspen, in response to a question about the veracity of a scene in “The Social Network.”

“One of the things you learn as a college president is that if an undergraduate is wearing a tie and jacket on Thursday afternoon at three o’clock, there are two possibilities. One is that they’re looking for a job and have an interview; the other is that they are an asshole. This was the latter case.”

The twins responded to Summers’ comments by writing an official-looking letter to the current president of Harvard, condemning Summers’ actions, which only reinforced the “asshole” characterization for many.

Because wading through piles of legalese isn’t something that I (or you) can spend most of my time doing, for better or for worse, I don’t understand the ins and outs of the case. But, thanks to “The Social Network” and the simplification engine that is popular culture, the story of the Winklevoss twins versus Mark Zuckerberg is not, in most people’s minds, about the minutiae of a breach-of-contract fight; it’s about the battle of two archetypes, and the two parties have come to symbolize the two sides of the “execution” (Zuckerberg) versus “ideas” (the Winklevosses) debate.

Any entrepreneur worth the ramen will tell you that ideas are a dime a dozen. “Startup ideas are not million dollar ideas,” wrote Y Combinator founder Paul Graham, “and here’s an experiment you can try to prove it: just try to sell one.”

Larry Summers’ comment about the Winklevosses being assholes because they wore suits to a meeting appeals directly to the Silicon Valley myth of a bunch of dudes wearing hoodies and Tevas, drinking Mountain Dew in a house in Palo Alto while they casually build the next billion-dollar company. I’m betting that’s no accident; Summers has slowly made inroads into Silicon Valley, is on the board of Square and is an advisor at Andreessen Horowitz. Moreover, Summers actually met Marc Andreessen (who is on the board of Facebook) through his protégée, Facebook COO Sheryl Sandberg.

Sorkin’s masterful “If you guys were the inventors of Facebook, you’d have invented Facebook,” line pretty much sums up the collective ethos of an industry that’s seen Friendster replaced by Myspace replaced by Facebook and lived through “RIP Good Times” to only see more good times. Welcome to Northern California, we don’t like old money and we don’t like patent trolls. “Hustle over entitlement” should be our state motto.

Indeed, when I asked on Twitter (and yes, Facebook) why Silicon Valley had such vehement feelings about what the twins symbolize, I was overwhelmed with similar responses. “The valley is filled with people who were misfits in school in a world filled with Winklevosses,” Redpoint VC Satish Dharmaraj said on Twitter. “The Valley knows that the idea was not ground breaking or new. Execution is everything. + some luck,” he said.

“It really feels like two jock assholes tried to take money from the little nerdy guy,” said valley veteran John Adams. “The right thing for them to do would have been to start up a competitor to Facebook and not call sour grapes the whole time.”

Menlo Ventures partner Shervin Pishevar gave a longer explanation on Quora:

“If the Winklevii had spent all their time and energy competing with Facebook in the arena of the marketplace rather than in the confines of the courtroom, we here in Silicon Valley would have had more sympathy and respect regardless of whether they had failed or succeeded. If you want our respect, gear up and enter the arena of entrepreneurship and be willing to die and battle for your idea to win the hearts and minds of those in the stands. The users who vote with their time, money and passion count here — nothing else. No court order or settlement can give you the legitimacy and honor that hundreds of millions of users can. Merit matters more. Always.”

Summers’ character in The Social Network also sums it up, “Harvard undergraduates believe that inventing a job is better than finding a job. So I suggest again that the two of you come up with a new new project … The two of you being here is wrong! It’s not worthy of Harvard, it’s not what Harvard saw in you. You don’t get special treatment.”

I’m sure hundreds if not thousands of entrepreneurs silently cheered at that part.

Movies are really good at caricatures, but, as the leaked Zuckerberg IMs show, reality has many facets: the Winklevosses are Olympians (which is an accomplishment, is it not?) and I’ve met plenty of assholes wearing hoodies. And as we brace ourselves for the second round of the Winklevosses versus Facebook, one thing is clear: whatever the fight is about, it goes way beyond money, especially for all of us on the sidelines.


Healthcare Disruption: Pharma 3.0 Will Drive Shift from Life Science to HealthTech Investing (Part I of III)

Editor’s note: This guest post was written by Dave Chase, the CEO of a health technology company that was a TechCrunch Disrupt finalist. Previously he was a management consultant for Accenture’s healthcare practice and was the founder of Microsoft’s Health business. You can follow him on Twitter @chasedave.

Healthcare’s hyperinflation is driving a transformation of how care gets reimbursed, resulting in massive disruption across healthcare. For example, pharma companies will succeed or fail based not on how much product they sell, but on how well their market offerings improve outcomes.

Because pharmaceutical companies are the largest spenders on R&D in healthcare, massive changes in the way they operate are going to have a profound effect on health technology as pharma adapts to marketplace changes. This is creating opportunity for startups that heretofore have been stymied when trying to make inroads into healthcare.

In the past, I have frequently said that healthcare is where tech startups go to die. A combination of factors ranging from risk aversion to entrenched legacy vendors exerting account control to health IT not being viewed as a source of competitive advantage for healthcare providers has made it difficult for promising new companies to make a dent. In this three-part series, I will lay out the most important dynamics transforming the opportunity for health technology startups.

In Part I, I will highlight how “Pharma 3.0” will drive a shift from traditional life science to HealthTech investing. In Part II, I will outline how healthcare providers will use HealthTech to differentiate and produce better outcomes. I’ll wrap up the series by laying out how many healthcare organizations are on a path to repeat the mistakes the newspaper industry made beginning in the mid-’90s. There are remarkable parallels that both spell peril for incumbent healthcare providers if they repeat the newspaper companies’ mistakes and create massive new opportunities, such as those I outlined earlier in pieces about The Most Important Organization in Silicon Valley No One Has Heard About and Hotwire for Surgery.

Pharma 3.0 Will Drive Shift from Life Science to HealthTech Investing

E&Y has produced industry reports for the pharmaceutical industry that provide a comprehensive look at pharma’s history and present condition. E&Y interviewed scores of innovators and senior executives to outline a vision for what it calls “Pharma 3.0.”

The following is an excerpt from their nearly 100-page report entitled “Progressions – Building Pharma 3.0” (read the full report here):

    The Progressions report identifies several industry trends driving nontraditional companies into the sector, including health reform, health IT, comparative effectiveness, and the rising confidence in consumer power. These factors and others are prompting pharmaceutical companies to broaden their focus from producing new medicines to delivering “healthy outcomes” – a shift that will be driven through creative partnerships and business model innovation.

    The Progressions report describes the rapid transition from the industry’s long-standing vertically integrated blockbuster-driven model, defined in the study as pharma 1.0, to today’s pharma 2.0 business model. Under this business model, companies have adopted a number of changes to improve productivity and financial performance, from pursuing more targeted therapies, broadening their portfolio of products and capabilities, to establishing more independent and flexible R&D units, to boosting partnerships with biotech firms, and universities and outsourcing many non-core functions.

    The report finds that even as pharmaceutical companies continue to implement strategies to prosper in pharma 2.0, these efforts may be overtaken by a pharma 3.0 “ecosystem” comprised of established industry members, nontraditional companies and an increasingly informed data-empowered consumer.

During my years working in health systems and hospitals, I rarely crossed paths with the pharma industry even though we were ostensibly serving the same organization. The only time I saw pharma reps was when I noticed well-dressed folks in the cafeteria who were clearly reps. My time was spent in the IT and Patient Accounting departments, where much of health IT was relegated. Whereas health IT was viewed as a cost item to be minimized, pharma and medical device products represented revenue generation and differentiation opportunities for healthcare providers.

In the flawed fee-for-service model that has driven healthcare’s hyperinflation, financial rewards incentivized activity (order a test, prescribe a drug, do a procedure, etc.) rather than positive health outcomes. For example, there are 60 million CT scans done per year in the U.S., despite the fact that there isn’t a radiologist in the world who believes anywhere near that volume is required. Nonetheless, we incur that high cost and excess radiation under the fee-for-service model that underpins legacy reimbursement. Fortunately, there’s a sea change underway to reward positive health outcomes over mere activity. In addition, electronic medical records are helping reduce duplicate tests.

Given my past (non)experience with pharma, it has been remarkable how many pharma companies are now proactively reaching out to software companies that can help them launch new services focused on outcomes, services that have little or nothing to do with what I would traditionally associate with pharma. Their strategies are varied and dynamic, but they aren’t sitting on their hands. For example, one shared how it recently entered into a 10-year agreement to be responsible for the end-to-end health of a population of individuals with a particular disease. As I will touch on in the third part of this series, this will have a profound effect on the competitive landscape for traditional health providers. Not many healthcare providers are prepared for this type of competitor.

Like it or not, healthcare is like most arenas: revenue (aka reimbursement) drives behavior. Pharma has been extremely adept at maximizing revenue in the fee-for-service environment. As one pharma exec said to me, “We have spent billions on developing and marketing our product but $0 on ensuring it is properly used.”

As pharma companies strive to be a “health outcomes” industry, the focus on outcomes will radically alter their behavior. They recognize the competitive threat. As the E&Y report stated, “Pharma companies have expanded the number of pharma 3.0 initiatives by 78% since 2010. Yet non-traditional players have invested even more in Pharma 3.0.” The sense of urgency with the pharma organizations I’ve met with is remarkable.

To date, IT investment in healthcare has been mostly limited to the administrative/reimbursement facets of healthcare (e.g., claims processing). The primary exception is the software embedded in medical devices, with little or no ability for the clinician to interact with the device. This contrasts with where money is actually spent in healthcare, i.e., healthcare delivery versus therapeutics, as outlined by angel investor and life science veteran Don Ross in “Investor: Health tech is the next big opportunity”.

How big is the healthtech opportunity? Data from the Centers for Medicare & Medicaid Services (CMS) show that the U.S. spent $2.5 trillion on health care in 2009. Of this, 84 percent was spent on healthcare delivery, which includes costs associated with clinicians and insurance companies. In contrast, only 16 percent was spent on therapeutics, including medical devices and drugs. Although venture investors traditionally have put their money into therapeutics rather than delivery, the balance is shifting.
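A quick back-of-the-envelope on the CMS figures quoted above shows where the dollars actually sit (purely illustrative arithmetic):

```python
total_spend = 2.5e12               # 2009 U.S. healthcare spend per CMS
delivery = 0.84 * total_spend      # clinicians, insurers, care delivery
therapeutics = 0.16 * total_spend  # medical devices and drugs
summary = f"delivery: ${delivery/1e12:.2f}T, therapeutics: ${therapeutics/1e12:.2f}T"
# summary == "delivery: $2.10T, therapeutics: $0.40T"
```

In other words, roughly $2.1 trillion flows through delivery against $0.4 trillion through therapeutics, which is the imbalance Ross argues venture investment has historically gotten backwards.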

As pharma companies recast themselves as “health outcomes” companies in response to anticipated reimbursement shifts, one can expect venture capitalists and pharma/biotech to shift their investment focus from almost exclusively life sciences to an integrated approach that includes HealthTech and an outcomes focus. Areas such as decision support, care coordination, and patient engagement become paramount if one is going to address outcomes rather than simply encourage more of the activities that the legacy reimbursement model has incentivized.

Increasingly, the very survival of the pharmaceutical industry is predicated on creative alliances with nontraditional players such as IT companies. Healthcare will no longer be where tech startups go to die; transformative products that may have languished in the past now have a way in. The very survival of one of the most profitable industries in the world depends on it.

July 23 2011


Ouch: The Netflix Price Change Hangover

It’s been pretty fascinating to watch Netflix’s growth from a company that Blockbuster laughed at in 2000 (when Founder and CEO Reed Hastings and former CFO Barry McCarthy proposed to Blockbuster management that they run its online brand) to the single largest source of web traffic in North America in 2011.

There have been quite a few hiccups and ups and downs along the way, as the on-demand video provider has struggled with Hollywood studios, succeeded as leadership has pushed its service onto TVs, game systems, and mobile devices — and more recently, re-focused on its streaming business.

Last week, that tweak to the business model saw a very public revision of the service’s pricing structure, a result of Netflix dividing its DVD rental and streaming services into two distinct businesses. In addition, Netflix created a whole separate management team for its DVD business and announced that it would offer its streaming plan at $7.99 a month and its DVD plan at $7.99 a month, so customers who want both streaming and discs will now have to pay about $16 a month, a 60 percent price increase over the previous plan options.
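The arithmetic behind the 60 percent figure, assuming the previous combined plan (streaming plus one DVD out at a time) ran $9.99 a month:

```python
streaming, dvd = 7.99, 7.99
combined_new = streaming + dvd  # $15.98, the "about $16 a month"
combined_old = 9.99             # prior streaming + 1-DVD plan
increase = (combined_new - combined_old) / combined_old
label = f"${combined_new:.2f}/mo, +{increase:.0%}"  # "$15.98/mo, +60%"
```

So the two $7.99 plans only look gentle in isolation; stacked against the old bundle they are a roughly 60 percent hike for dual-service subscribers.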

And, as you may have heard, customers were not happy. No, they were not happy at all. In fact, on the blog post in which Netflix announced said pricing changes, over 12,000 comments were posted (and that’s using Facebook’s commenting system, something TechCrunch readers are unhappily familiar with), most of them angry, and many in turn did their own announcing, saying they would be tendering their resignations, effective immediately.

So what? Well, according to YouGov’s BrandIndex, in the ten days since Netflix made its price changes, the national perception of Netflix’s brand among adults dropped precipitously, from 39.1 on July 12th to -14.1 on July 18th, and it currently sits at -6, putting Netflix in a virtual tie with Blockbuster. With a margin of error of 5, that’s no tiny aberration.

BrandIndex calculated its scores by asking Netflix, Redbox, DirecTV and Blockbuster customers about their impressions of each brand and what they’ve heard about it via word of mouth, advertising, etc. BrandIndex Global Managing Director Ted Marzili told me that the scores reflect a sample size of about 15,000 respondents. See the graph below.

Of course, it wasn’t long before Blockbuster was courting potential Netflix defectors with a 30-day free trial. And though it doesn’t seem that Blockbuster has been reaping rich rewards from Netflix’s change, Netflix’s stock, which has performed very well over the last two years (and was at a six-month high before the announcement on July 13th), has since dropped over 20 points. Some of that is natural — the stock was due for a slowdown — and of course some of it is not.

According to GigaOM, Morgan Stanley has also stepped in with its own Netflix survey, which found that, in fact, 26 percent of Netflix customers would be canceling their subscriptions altogether. These numbers have since been toned down, as the initial emotional angst wears off, but either way it seems likely that Netflix will be suffering in subscription revenue in the near future.

Netflix has now forced many of its customers to make a choice between streaming and DVDs, because, after all, as Reed Hastings himself told Erick Schonfeld back in May, the future is in streaming, not in them plastic discs. I mean who uses CDs anymore, ya know?

There is always a backlash when a major service hikes prices, but Netflix might have benefited from a bit more market research beforehand, and giving current users some form of incentive over new users, perhaps through a discount on a year-long plan, might have been smart. (It would also have shown a little consideration for loyal customers.)

What about a discount on a 2-for-1-type deal? After all, we’re living in the age of the daily deal, when consumers seem to expect a discount. Not to mention the fact that many wallets have undergone a significant squeeze over the last two years. Money is tighter than it was during the company’s early days, and Netflix might benefit from acknowledging that.

We’ll see how this all plays out. I expect Netflix’s brand perception and stock will be right back on track before long, but there’s always the chance that the hangover continues. And, perhaps more importantly — don’t laugh — will Blockbuster truly benefit as a result? How about Redbox? The local library?

Your thoughts?


Doubts About Lytro’s “Focus Later” Camera

I’ve been meaning to address this Lytro thing since it hit a few weeks ago. I wrote about omnifocus cameras as far back as 2008, and more recently in 2010, though at the time I was more interested in the science behind the systems — and it appears that Lytro uses a different method than either of those.

Lytro has been tight-lipped about their camera, to say the least, though that’s understandable when your entire business revolves around proprietary hardware and processes. Some of it can be derived from Lytro founder Ren Ng’s dissertation (which is both interesting and readable), but in the meantime it remains to be shown whether these “living pictures” are truly compelling or something that will be forgotten instantly by consumers. A recent fashion shoot with model Coco Rocha, the first in-vivo demonstration of the device, is dubious evidence at best.

A prototype camera was loaned for an afternoon to photographer Eric Chen, and while the hardware itself has been carefully edited or blurred out of the making-of video, it’s clear that the device is no larger than a regular point-and-shoot, and it seems to function more or less normally, with an LCD of some sort on the back, and the usual framing techniques. No tripod required, etc. It’s worth noting that they did this in broad daylight with a gold reflector for lighting, so low light capability isn’t really addressed — but I’m getting ahead of myself.

Speaking from the perspective of a tech writer and someone interested in cameras, optics, and this sort of thing in general, I have to say the technology is absolutely amazing. But from the perspective of a photographer, I’m troubled. To start with, a large portion of the photography process has been removed — and not simply a technical part, but a creative part. There’s a reason focus is called focus and not something like “optical optimum” or “sharpness.” Focus is about making a decision as a photographer about what you’re taking a picture of. It’s clear that Ng is not of the same opinion: he describes focusing as “a chore,” and believes removing it simplifies the process. In a way, it does — the way hot dogs simplify meat. Without focus, it’s just the record of a bunch of photons. And saying it’s a revolution in photography is like saying dioramas are a revolution in sculpture.

I’m also concerned about image quality. The camera seems to be fundamentally limited to a low resolution — and by resolution I mean true definition, not just pixel count. I say fundamentally because of the way the device works. Let me get technical here for a second, though there’s a good chance I’m wrong in the particulars.

The way the device works is more or less the way I imagined it did before I read Ng’s dissertation. To be brief, the image from the main lens is broken up by a microlens array over the image sensor, and by analyzing (a complex and elegant process) how the light enters various pixel wells underneath the many microlenses (which each see a slightly different picture due to their different placements), a depth map is created along with the color and luminance maps that make up traditional digital images. Afterwards, an image can be rendered with only the objects at a selected depth level rendered in maximum clarity. The rest is shown with increasing blur, probably according to some standard curve governing depth of field falloff.
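The rendering step described above (pick a focal plane from the depth map, then blur everything else with increasing strength) can be sketched in a few lines of numpy/scipy. This is only a toy model of that final stage: it assumes an already-extracted all-in-focus image and depth map, sidesteps the light-field analysis entirely, and the function name and the linear blur falloff are my own assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def refocus(image, depth_map, target_depth, blur_scale=2.0):
    """Toy synthetic refocus: pixels on the chosen depth plane stay sharp,
    pixels farther from it get progressively stronger Gaussian blur."""
    image = image.astype(float)
    out = np.zeros_like(image)
    for d in np.unique(depth_map):
        # Blur strength grows with distance from the focal plane,
        # a crude stand-in for a real depth-of-field falloff curve.
        sigma = blur_scale * abs(float(d) - target_depth)
        layer = image if sigma == 0 else gaussian_filter(image, sigma=sigma)
        mask = depth_map == d
        out[mask] = layer[mask]
    return out

# A striped plane at depth 0 above a striped plane at depth 3:
img = np.zeros((8, 8))
img[:, ::2] = 1.0
depth = np.zeros((8, 8), dtype=int)
depth[4:, :] = 3

focused = refocus(img, depth, target_depth=0)
# Rows at depth 0 are untouched; rows at depth 3 are smoothed toward gray.
```

A real renderer would use a measured falloff curve and handle occlusion boundaries between depth layers; this sketch ignores both.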

It’s immediately apparent that an enormous amount of detail is lost, not just because you are interposing an extra optical element between the light and the sensor (one which must simultaneously be extremely low in faults and yet is very difficult to make so), but also because the system fundamentally relies on capturing semi-redundant data for comparison, meaning each pixel yields less data for a final image than it would in a traditional system. The pixels are of course providing information of a different kind, but as far as producing a sharp, accurate image goes, they are doing less. Ng acknowledges this in his paper, and the reduction of a 16-megapixel sensor to a 296×296 image (a reduction of over 99% of the pixel count) in the prototype is testament to this cost.

The process has no doubt been improved along the lines he suggests are possible: square pixels have likely been replaced with hexagonal, the lenses and pixel widths made complementary, and so on. But the limitation still means trouble, especially on the microscopic sensors being deployed to camera phones and compact point and shoots. I’ve complained before that these micro-cameras already have terrible image quality, smearing, noise, limited exposure options, and so on. The Lytro approach solves some of these problems and exacerbates others. On the whole downsampling might be an improvement, now that I think of it (the resolutions of cheap cameras exceed their resolving power immensely), but I’m worried that the cheap lenses and small size will limit Lytro’s ability to make that image as versatile as their samples — at least, for a decent price. There’s a whole chapter in Ng’s paper about correcting for micro-optical aberrations, though, so it’s not like they’re unaware of this issue. I’m also worried about the quality of the blur or bokeh, but that’s an artistic scruple unlikely to be shared by casual shooters.

The limitation of the aperture to a single opening simplifies the mechanics but also leaves control of the image to ISO and exposure length. These are both especially limited in smaller sensors, since the tiny, densely-packed photosensors can’t be relied on for high ISOs, and consequently the exposure times tend to be longer than is practical for handheld shots. Can the Lytro camera possibly gain back in post-processing what it loses in initial definition?

Lastly, and this is more of a question, I’m wondering whether these images can be made to be all the way in focus, the way a narrow aperture would show it. My guess is no; there’s a section in the paper on extending the depth of field, but I’m not sure the effect will stand scrutiny in normal-sized images. It seems to me (though I may be mistaken) that the optical inconsistencies (which, to be fair, generate parallax data and enable the 3D effect) between the different “exposures” mean that only slices can be shown at a time, or at the very least there are limitations to which slices can be selected. The fixed aperture may also put a floor on how narrow your depth of field can be. Could the effect achieved in this picture be replicated, for instance? Or would I have been unable to isolate just that quarter-inch slice of the world?

All right, I’m done being technical. My simplified objections are two in number: first, is it really possible to reliably make decent photos with this kind of camera, as it’s intended to be implemented (i.e. as an affordable compact camera)? And second, is it really adding something that people will find worthwhile?

As to the first: designing and launching a device is no joke, and I wonder whether Ng, coming from an academic background, is prepared for the harsh realities of shipping a product. Will the team be able to make the compromises necessary to bring it to shelves, and will those compromises harm the device? They’re a smart, driven group, so I don’t want to underestimate them, but what they’re attempting really is a technical feat. Distribution and presentation of these photos will have to be streamlined as well. When you think about it, a ton of the “living photo” is junk data, with the “wrong” focus or none at all. Storage space isn’t so much a problem these days, but it’s still something that needs to be looked at.

The second gives me more pause. As a photographer I’m strangely unexcited by the ostensibly revolutionary ability to change the focus. The fashion shoot, a professional production, leaves me cold. The “living photos” seem lifeless to me because they lack artistic direction. I’m afraid that people will find that most photos they want to take are in fact of the traditional type, because the opportunities presented by multiple focus points are simply few and far between. Ng thinks it simplifies the picture-taking process, but it really doesn’t. It removes the need to focus, but the problem is that we, as human beings, focus. Usually on either one thing or the whole scene. Lytro photos don’t seem to capture either of those things. They present the information from a visual experience in a way that is unfamiliar and unnatural except in very specific circumstances. A “focused” Lytro photo will never be as good as its equivalent from a traditional camera, and a “whole scene” view presents no more than you would see if the camera was stopped down. Like the compound insect eye it partially mimics, it’s amazing that it works in the first place, and its foreignness by its nature makes it intriguing, but I wouldn’t call it a step up.

“Gimmick” is far too harsh a word to use on a truly innovative and exciting technology such as Lytro’s. But I fear that will be the perception when the tools they’ve created are finally put to use. It’s new, and it’s powerful, yes, but is it something people will actually want to use? I think that, like so many high-tech toys these days, it’s more fun in theory than it is in practice.

That’s just my opinion, though. Whether I’m right or wrong will of course be determined later this year, when Lytro’s device is actually delivered, assuming they ship on time. We’ll be sure to update then (if not before; I have a feeling Ng may want to respond to this article) and get our own hands-on impressions of this interesting device.

July 22 2011


Qunar Baidu Deal Closes. How This Could Ripple Through Chinese Startups.

Chinese search giant Baidu’s investment in Qunar officially closed this week, and the two teams are now working on integrating Qunar’s search results into Baidu’s travel vertical. I caught up with Qunar CEO Chenchao Zhuang for his first Western interview about the deal, to talk about what it means for the company and for China’s Web scene broadly.

This $306 million deal is not only big in dollar amount; it could represent a seminal turning point for the Chinese Web. Until now, Chinese Web companies have had a strong bias toward going it alone. Much like in earlier eras of Silicon Valley, success is considered to be an IPO, not an acquisition. But there’s a lot of pressure on that to change, and most industry watchers feel it’s just a matter of time, and of course price.

For one thing, the Chinese Web is crowded, and the divide between the haves and have-nots is incredibly lopsided. Distribution is becoming an expensive challenge, and tie-ups with companies like Baidu or Tencent can represent massive synergies and upside for good, profitable Web companies having a hard time breaking out into the mass market.

And the heady prices of some of the Chinese giants will put pressure on those companies to show new areas for growth, something that gets harder and harder by the simple law of large numbers. Meanwhile, the big Chinese Web giants are sitting on staggering amounts of cash. Investor pressure to do something with it is going to mount, and as the markets stay volatile for Chinese Internet IPOs, those deals are going to start to look more attractive... at some point.

Qunar is taking a step half-way in that direction by taking the investment from Baidu, and Zhuang expects more companies could start following its playbook.

Qunar is by all accounts doing well: it’s been profitable since last year, its revenues are at least doubling every year, and it is still on the IPO track. It’s raised just $27 million over six years and has spent only $6 million of that, focusing on profits and revenues from the beginning like many Chinese Web companies.

But online travel is still a niche market in China. A deeper look at the market shows why calling Ctrip the “Expedia of China” or Qunar “the Kayak of China” is an inherently flawed analogy. It also shows why Qunar needs the power of Baidu’s distribution more than it really needs Baidu’s cash.

Only 8% of people in China book travel online, versus 35%–50% in the US, depending on the category of travel. And unlike the early days of online travel agencies in the US, where an oligopoly of sites divvied up the market, Ctrip consumes the bulk of China’s online travel pie. What’s more, Ctrip isn’t really a great analog to Expedia or Travelocity, because 70% of its business goes through call centers; only 30% is truly booked online.

Qunar started out in 2005 to be the “Kayak of China,” but has substantially changed its model, because the online inventory just isn’t there yet in China the way it is in the US. “The airlines just started their online initiatives recently and the percentage is still low,” he says. “Most hotels do not have a central reservation system yet. There is no legacy system the way there was in the US.”

Not only did Qunar have to painstakingly build its own complex database of deals, routes and prices, but one of Qunar’s main businesses is giving hotels and airlines a software-as-a-service booking engine so that they’ll actually have the kind of results to aggregate and search that Kayak was able to take for granted. “It’s painful, but it gives us a more and more unique barrier to other entrants,” Zhuang says.

The flipside to the pain is a massive greenfield opportunity to build better travel engines and consumer experiences from the ground up in China. When modern Internet companies are the ones giving hotels and airlines the reservation engines, they’re likely easier to connect with online systems. “Sometimes being behind is a good thing. We have no legacy systems and we can do everything from scratch,” Zhuang says.

But calling Qunar an OpenTable for travel isn’t quite right either, because it doesn’t charge for the software. It doesn’t make money off transaction fees either. Eighty percent of its revenues come from pay-per-click ads, and another 15% comes from display ads. This may be the biggest distinction from other online travel players: essentially, Qunar is a media company, not an ecommerce or software company at all. Like TripAdvisor, it has aggregated thousands of consumer reviews to help consumers make decisions, and it makes money by advertising against that content and its search results.

Given that, the deal with Baidu makes a bit more sense to Western eyes. There’s no media company on earth that can’t benefit from a torrent of eyeballs from a major portal. (Even a laggard. Just ask TechCrunch, post AOL deal.) No doubt startups in the same profitable-but-needs-more-distribution phase will be watching this partnership closely to see if those always promised synergies are actually there.


More Evidence There’s No Bubble: VC Investments Were Flat in Q2

Dow Jones VentureSource released its second quarter numbers for the venture industry today, and there’s a reason they’re not dominating the headlines. They’re pretty boring: Overall investors put $8 billion into 776 deals in the US in the second quarter, a decrease of 5% in terms of invested cash and 2% in terms of deals. The median amount raised per deal was $5.2 million, up from $4.6 million a full year earlier. Yawn, right?

But the fact that the numbers are so unremarkable is what makes them interesting. It reinforces what people like me have been arguing for months: A handful of hot companies does not a bubble make.

The venture business has always been an outrageously lopsided one: 95% of the returns come from 5% of the deals. But while that was still true in the late 1990s, the overall numbers soared astronomically. That’s what happens in a bubble: A rising tide lifts all boats.

That’s clearly not happening here. In the public markets, LinkedIn has come down from its highs but held on to a healthy price around $100 a share, as you’d expect from a 10-year-old, still-growing company with few market comps that didn’t float many shares to begin with. Pandora is trading around $18 a share, closer to its 52-week low than its 52-week high. A smaller issue like Zillow has cooled off dramatically since its huge opening pop, but it is still about 30% higher than its initial pricing. And again, Zillow is a pretty mature business. There’s some crazy volatility in the early days of these stocks, no question. But there doesn’t appear to be a broader market impact from any of them, and they have each quickly settled into more rational price territory. Not what you see in a bubble.

Let’s look at the secondary markets: most of the attention still goes to the big five or six social media names. The real opportunity for this market to take off is creating liquidity for the companies “below the fold,” so to speak: companies that have built solid businesses of $100 million or so in revenue, which are too small to go public but have employees who need some liquidity. There’s just no raging speculation there; no middle-America grandma is buying shares in these names. Indeed, many of these companies are just now trying to wrap their heads around how they could best use the secondary markets to their advantage. This is why the secondary markets remain a pretty small phenomenon in the world of finance.

And now, we’ve got new numbers from the venture world that back up the same sentiment. Dig a little deeper and the point is made further: dealmaking in the healthcare space is down 12%, and the capital invested is down 17%. Investments in biopharmaceuticals were decimated, with a 25% drop in deals; investments in medical devices were flat. Software-related companies were a relative bright spot in healthcare, but deals were up only 5%.

Likewise, the cash going into clean tech took a nose-dive in the second quarter. The sector raised $556 million across 29 deals, less than half the cash that 30 clean tech companies raised in the second quarter of 2010. This in the year John Doerr predicted cleantech would finally start to produce those Netscape-moment-like IPOs. Doerr is one of the smartest investors in the industry; you’d think in a raging bubble his prediction would have been easy to prove true. Instead, the category looks colder than ever. Many VCs seem to be wondering whether the cynics were right back in the early-to-mid-2000s when they said that cleantech is too capital-intensive and long-term to pay off in a modern venture capital ecosystem dominated by the instant gratification of the consumer Web.

Even enterprise software, another sector that’s supposed to be hot, had a slight dip in activity. One hundred twenty-five companies raised $1 billion; that represented a 15% increase in capital over last year, but a 3% drop in dealmaking overall.

Now, did consumer services do well? Of course. But that’s easily skewed by just a handful of mega-financings. Indeed, the numbers showed the increase was mostly in cash, not overall deals. Capital raised by consumer companies jumped 51% over the second quarter of 2010, but deal making was up just 7%.

When you dig in deeper, the sub-category that includes social media, entertainment and consumer Web only saw a pop of 25% in cash raised over a year ago, and saw a slight drop in deal making activity. In aggregate, consumer Web companies raised less than $1 billion in the quarter. Clearly a few big mega-financings are driving those numbers, and there’s not even enough of them to lift the top line numbers.

I’ve said it before, I’ll say it now and I’ll likely keep saying it: a handful of surging companies with heady valuations does not constitute a macro-economic phenomenon. It constitutes, at worst, a handful of really overvalued companies. The only thing suggesting Silicon Valley is in a bubble is the headlines, because the numbers just aren’t there.


Amazing Charts: Apple’s “Super-Seasonal Performance”

Yes, Apple had a massive quarter. But just how massive? This chart from Asymco puts Apple’s incredible revenue growth in perspective. Apple grew revenues by 82 percent last quarter, which is unprecedented growth for a company of its size, with $28.6 billion in quarterly revenues.

As you can see from the chart, nearly all the growth in sales is coming from the iPhone (gray) and the iPad (blue). In fact, 71 percent of Apple’s sales and 78 percent of its profits now come from iOS devices. And the iPad alone is cannibalizing Mac sales.

But take a closer look at the chart. What is truly astounding is that for the first time in the last 6 years, second quarter sales peaked higher than holiday sales in the fourth quarter. Every other peak is during the holidays, but the growth of Apple’s new class of touch computing products helped push sales past that to “Super-seasonal performance,” as Horace Dediu puts it on Asymco.

We all talk about “blowout” quarters, but there is something truly unprecedented happening here. Apple’s growth is accelerating. The chart below shows growth rates for both sales and net income shooting upward for the past 7 quarters. While sales grew 82 percent, profits grew even faster at 122 percent. You’d expect to see that kind of growth from a small startup with a few million dollars in profits, not a tech giant with profits of $7.3 billion. Compared to the year before, Apple added $4 billion in profits alone.

As Idealab’s Bill Gross asks, “Has there ever been a company in history that is this big and which has grown by this big of a percentage in only 1 year?”


July 21 2011


Federal Bureau Of Sisyphean Labors

The loosely-organized but unquestionably effective hacking group Anonymous has gotten its hands on what it claims are confidential NATO documents. It’s the latest in a line of seemingly arbitrary attacks, the arbitrariness being the result of their somewhat haphazard and crude methods. I don’t describe it this way to invite their vengeance, but as part of making a point about them. Their crudeness is part of their legitimacy.

Unsurprisingly, the response was one of boilerplate outrage, albeit with a truly classic quote from FBI deputy assistant director Steven Chabinsky: “We want to send a message that chaos on the internet is unacceptable.”

I’m hoping this one will go down in history with chestnuts like “a series of tubes” and “just don’t hold it that way.” Chabinsky continued in his interview with NPR:

…it’s entirely unacceptable to break into websites and commit unlawful acts.

The investigative opportunities that present themselves in this area are transnational. The resolution of these cases will involve international cooperation. The Internet has become so important to so many people that we have to ensure that the World Wide Web does not become the Wild Wild West.

Leaving aside the curious implication that the web was not always wild, his choice of words is interesting. “Transnational” and “international cooperation” imply a global alignment on internet issues that simply doesn’t exist, though I’m sure the well-established channels of international police cooperation function as advertised. Anonymous issued a response of sorts to Chabinsky’s words, in which they are a bit less optimistic.

Tracking and collecting hackers of this type is like herding cats that move at the speed of light. The arms race in the detection/evading detection field is lopsided, and hackers are unquestionably at an enormous advantage. They’re savvy enough to avoid the pitfalls set for them by aging heads of security, and even cooperation at the level of internet providers is unlikely to be too effective. Besides, it’s survival of the fittest: script kiddies running LOIC on their mom’s unencrypted open wifi are going to get picked up while the shrewd hacker who pays for a Swedish VPN and codes his own tools won’t even be on the radar.

The fun part is that all this hacking really isn’t even very sophisticated. I mean, it’s not something you can just pick up and do on a Sunday afternoon, but these people aren’t sneaking into access tunnels and jacking into corporate mainframes. They freely admit it; part of LulzSec’s mission was to show just how poorly protected much “secure” information is. This NATO hack (like many high-profile hacks recently) was accomplished with a little SQL injection, an embarrassing oversight by a security team that, if anything, should be far more circumspect in its work than the average security-conscious organization or company. I wouldn’t go so far as to say that those who are so easily hacked deserve it, exactly, but they deserve the dressing down they get later. The Sony hacks, for instance, almost certainly harmed consumers and as such are deplorable acts — but Sony is more deplorable for its irresponsibility and tone-deaf response.
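The sort of SQL injection mentioned above is trivially demonstrated. This is a generic toy example in Python with sqlite3 (hypothetical table and payload, nothing to do with the actual breach), contrasted with the parameterized query that would have prevented it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "a1"), ("bob", "b2")])

payload = "nobody' OR '1'='1"  # classic injection string

# Vulnerable: user input is spliced directly into the SQL text, so the
# quote in the payload terminates the literal and the OR clause runs.
leaked = conn.execute(
    f"SELECT name FROM users WHERE name = '{payload}'").fetchall()

# Safe: a parameterized query binds the payload as a single literal value.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (payload,)).fetchall()

print(len(leaked), len(safe))  # prints "2 0": the injection matched every row
```

The dressing-down these organizations get usually boils down to exactly this one-line difference.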

It’s like leaving your bike unlocked on the street as a kid and coming back to find it gone. It’s not that you deserved to have your bike stolen, but you clearly don’t value it much if you don’t take even elementary precautions. You may not agree with the thief’s motives (probably mercenary), but they don’t have to be paragons of virtue to be the bearers of an important lesson: yes, this can happen to you.

To come back to Chabinsky’s claim that chaos on the Internet is unacceptable, though. Mr. Chabinsky, I admire your dedication to orderliness, but you may as well try to straighten out a rainbow. Chaos isn’t the problem — chaos is the point. I don’t envy anyone whose stated job is to reverse entropy.

July 20 2011


Lion’s Internet Recovery Feature: The Past Meets The Future

Although you won’t find any Apple users who will admit it, Macs do occasionally crash and fail, sometimes in spectacular ways. In my experience, while they have far fewer visible errors, the ones that users end up seeing are more serious than the scattered Windows annoyances and driver issues. But by and large, recovery and error management haven’t needed to be among Apple’s marquee features. Graceful failure is a merit, but not something you want listed next to Mission Control and AirDrop.

So it’s no surprise that an interesting little feature built into Lion is receiving next to no promotion — though Apple is far from hiding it. The improved recovery console is a nice feature, but it’s the Internet Recovery thing I’m more interested in.

Here’s Apple’s succinct user-facing explanation of the feature:

If your Mac problem is a little less common — your hard drive has failed or you’ve installed a hard drive without OS X, for example — Internet Recovery takes over automatically. It downloads and starts Lion Recovery directly from Apple servers over a broadband Internet connection. And your Mac has access to the same Lion Recovery features online.

Perhaps it’s not being trumpeted because it’s not really a new feature. Macs have been able to boot from a networked drive for quite some time — over a decade, in fact. The fact that it was limited to locally-administrated networks with locally-hosted disk images isn’t a limitation of NetBoot itself, but was simply a pragmatic measure considering remotely downloading even a couple hundred megs on a circa-1999 connection would be impractical. So the capability is nothing new, but making it a standard recovery feature is.

Apple is making itself the net admin and switching from a local protocol to a remote one, that’s all. Like so many cloud services, it is a possibility now because of improvements in bandwidth and storage capacity, not because of any magical new powers possessed by MacBook Airs.

An interesting bit is that this recovery mode works even if your drive is blank — as in zeroed. The normal Recovery HD stuff occupies a (quite hefty) partition of your boot drive, so Internet Recovery can’t live there if it’s to work with a fresh or scrambled drive. It must live in the onboard EFI firmware, which is reassuring but a little creepy. Even if you crack open your Mac and swap out the drive, it’s still going to wake up thinking I am a Macintosh.

From there on it’s business as usual. It loads a Recovery HD disk image from Apple’s servers and you’re off to the races. The little recovery partition is likely nothing more than the most basic graphical and executable items necessary to interface with the wifi (WPA only), reach the store, decode the disk image, and so on.

This shouldering by Apple of bandwidth and administrative duties for non-power users is certainly indicative of their upcoming iCloud and iTunes strategies. They’ve got motive and opportunity (not to mention the cash and the hardware) to shift pretty much all your content server-side, including (though by baby steps at first) the OS itself. And the statement they want to make to the consumer and user is this: “We’ve got it.” They’re taking responsibility away from the user in other ways as well (to be discussed later), and obscuring the inside of the machine has been a priority for Apple for a decade; this is just another, slightly less visible, portion of their moving everything but the very facade of their devices away from the grasp of the user, for good or ill.

Lion will come on a USB drive next month for the rather curious price of $70, but you can save money by making your own bootable disc or drive. The “installESD.dmg” file is lurking inside your Lion installer, and making it bootable is… a job for Google. You paid for it, so do what you want with it. I’ll be damned if I’m paying $40 extra for their USB drive, so I’ll be doing this as soon as I upgrade.

[some info via this Hacker News thread]
