Why develop new products during a recession

November 10th, 2008

For the past year and a half I’ve been driving past a new BMW dealership while it has been under construction. The project started just before the beginning of the sub-prime saga, when the economy was still good, credit was easy, and people were lining up to buy new cars. Now the new building is almost ready, the economy is in bad shape, and dealerships are struggling to stay afloat.

A number of prominent VCs published letters they sent to their companies on how to survive the downturn. The standard advice includes not hiring, shutting down or cutting R&D, and making everyone, including receptionists, sell. This approach, which boils down to getting as much cash in and as little out as possible, sounds logical, especially for a startup strapped for cash. But what if a company has cash reserves sufficient to last several years even if sales dried up completely? Is there a better strategy than hibernating until the economic spring comes back?

A company that operates in survival mode during downturns ramps up new product development during boom times. In a good economy financing is easy and, as a result, many new companies are started. Demand for engineers increases and there is a lot of noise from all the new products being introduced into the market.

During a downturn, such a company concentrates on sales, which become harder and harder to get (unless the company is selling something that is in demand during a recession). Sales people, at least the good ones, are let go only as a last resort. Since the majority of companies tend to operate in survival mode, there is not much opportunity to improve the quality of the sales team, at least not until later, when companies start running out of cash. This company faces an uphill battle in both good and bad economic conditions.

Let’s now examine how the contrarian approach works, assuming the company has enough cash to survive several years with significantly reduced sales. Such a company would ramp up new product development during the downturn and scale back the sales effort, perhaps even purging the sales team. At this time it should be easier and cheaper to pick up quality engineers since there are more of them on the market and there is less competition from other companies. It is also easier to introduce new products and stand out as a market leader during a recession.

Towards the end of the downturn the company can try to improve the quality of its sales team by hiring people from failed companies. As boom times come back, the contrarian approach yields new products ready for the market and the sales team ready for the renewed interest. At this point the company becomes cautious of any aggressive expansions as costs increase. Instead, it concentrates on accumulating enough cash to repeat the cycle when the economy turns bad again.

During boom times companies rush to get to market as quickly as possible in order not to miss opportunities. A downturn, therefore, could be a perfect time to develop and introduce radical and unproven new technology that can take years to get right.

The contrarian approach is logical for a bootstrapped or established business that has had a chance to accumulate substantial cash reserves during a boom. It is the way investing, especially the VC type, works that forces companies into survival mode. Raising venture capital in a good economy is a lot easier than during a downturn. It is also easier to get investment for an idea that is in a “hot” market, such as e-commerce during the dot-com boom or social networks more recently. VCs also expect their companies to expand rapidly. This forces a VC-funded company to burn cash by growing rapidly in a crowded and noisy field with expensive and scarce engineers.

There are other advantages to expanding during a recession. Office space becomes cheaper as demand slows. It is easier to negotiate better deals with suppliers and partners as they become more dependent on the revenue your business brings. Tax incentives for R&D, starting new businesses, and hiring are often introduced during recessions to revive the economy. It is also well known that teams become more focused and work harder in the face of a powerful enemy, and a recession can be such an enemy. Boom times have the excitement of the overall activity in the field as well as easy sales. But excitement is fleeting while the resolve to outlast a recession stays.

The contrarian approach is not without risks, the biggest of which is running out of cash before the downturn ends. The other problem is finding early customers to use your product and provide feedback. However, one can offer the initial version for free or at a significant discount, which may work rather well during a recession, when customers presumably need your product but simply cannot afford to pay full price at the moment.

The survival approach is not without risks either, the biggest of which is expanding into a recession, as the BMW dealership example above illustrates. As with the contrarian approach, there is also the possibility of failing to preserve enough cash and generate enough sales to weather a recession.

How many cores do we really need

November 2nd, 2008

After CPU manufacturers hit the frequency wall, adding cores became the new way of making “better” processors. However, there does not seem to be much discussion of whether additional cores actually improve performance for common, real-life use cases. After the release of a new CPU we see a slew of performance reports, most of which use synthetic benchmarks. If those benchmarks are able to take advantage of additional cores, then we often see significant performance improvements as the number of cores increases.

While it may be hard to conduct a precise performance comparison of real use cases (e.g., how does one measure the performance of a word processor?), we can determine a set of common applications used in a particular setting. We can then analyze the typical load for each of these applications and reason about whether additional cores will improve their performance.

It seems natural to divide all computer use cases into three broad categories: Desktops (including laptops), Workstations, and Servers. I am intentionally ignoring high-performance computing (HPC) as too specialized. Typical applications for a desktop machine include an office suite, email client, web browser, instant messenger, audio/video player, and a photo management application. Desktops for home use normally also include games.

The common property of most of these applications is that they are bound by user input and/or the network. That is, most of the time they are waiting for user input or for data arriving over the network. The few applications that have CPU-intensive workloads (e.g., the audio/video player and the photo management application) are not easily parallelizable. Only the photo management application and games have the potential for performing several CPU-intensive tasks concurrently (e.g., enhancing or resizing a batch of photos). For games, however, a more powerful graphics card is often a more effective and cheaper way to increase performance.
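
To make the last point concrete, here is a minimal Python sketch of the kind of batch photo operation that can use several cores. It assumes the Pillow (PIL) imaging library and a made-up photos/ directory; the point is simply that each photo is independent, so the work spreads naturally across cores:

    import glob
    import multiprocessing

    from PIL import Image  # assumes the Pillow/PIL imaging library is installed

    def make_thumbnail(path):
        # Each photo is independent of the others, so a pool of worker
        # processes can resize them concurrently, one worker per core.
        image = Image.open(path)
        image.thumbnail((800, 600))
        image.save(path + ".thumb.jpg", "JPEG")

    if __name__ == "__main__":
        photos = glob.glob("photos/*.jpg")  # hypothetical input directory
        pool = multiprocessing.Pool()       # defaults to one worker per core
        pool.map(make_thumbnail, photos)
        pool.close()
        pool.join()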

From this analysis it becomes quite obvious that the typical desktop CPU usage pattern consists of a mostly idle state with bursts of activity, usually associated with responding to user input or the availability of network data. It is also clear that adding a second or any subsequent core to a desktop won’t improve the performance of its common applications, while a better-performing single-core CPU probably would. A second core might be beneficial to a few applications and can also improve the responsiveness of the system when a CPU-intensive task is running in the background (e.g., batch photo processing). Furthermore, having extra cores in a power-constrained machine (e.g., a laptop) can actually be a disadvantage unless the extra cores can be completely shut down.

Alternative paths to improving the performance of desktop systems include a higher-performance memory subsystem (faster memory buses and larger caches) as well as specialized processors. An example of the latter approach is the use of a modern GPU’s stream processing capabilities in general applications.

Besides the desktop applications mentioned above, workstations usually run one or more specialized applications, such as compilers, CAD applications, or graphics/video/sound editing software, which normally have CPU-intensive workloads. Having additional CPUs and/or cores in a workstation often improves the performance of the specialized application, but to what degree depends on how well its workload can be processed in parallel. For example, C and C++ compilation can often be performed in parallel and, on big projects, one can add extra cores and achieve better build times until the memory or disk subsystem becomes the bottleneck. Single-stream video encoding, on the other hand, can be a lot harder to parallelize.
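
As an aside, this is essentially what build tools such as GNU make do with the -j option: independent translation units are compiled by separate compiler processes. The following Python sketch shows the same idea in miniature; the source file names are made up and g++ is assumed to be on the PATH:

    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    def compile_unit(source):
        # Each translation unit is handled by its own compiler process, so the
        # actual CPU work happens outside Python and scales with the cores.
        subprocess.check_call(["g++", "-c", source, "-o", source + ".o"])

    if __name__ == "__main__":
        sources = ["a.cxx", "b.cxx", "c.cxx"]   # hypothetical translation units
        with ThreadPoolExecutor() as executor:  # worker count based on core count
            list(executor.map(compile_unit, sources))  # list() re-raises any errors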

Servers are where multi-core CPUs have the most potential. Server applications are naturally parallelizable since they often need to perform the same or similar tasks concurrently. However, some server applications, for example database management systems, may have a hard time scaling to a truly large number of CPUs/cores because of the need for exclusive access to shared resources. Other applications, for example web servers and stateless application servers, can take advantage of truly massively parallel systems. Virtualization software is another class of applications that can benefit greatly from multi-core CPUs.
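
To illustrate why server workloads parallelize so naturally, here is a minimal Python sketch of a server that gives every connection its own thread. The port and the echo behavior are made up, and in CPython the GIL limits pure-Python CPU scaling, but the one-handler-per-connection structure is what lets real servers spread independent requests across cores:

    import socketserver

    class EchoHandler(socketserver.StreamRequestHandler):
        def handle(self):
            # Each client connection is served independently of all the others.
            for line in self.rfile:
                self.wfile.write(line)

    if __name__ == "__main__":
        # ThreadingTCPServer spawns a new thread per connection, so requests
        # are processed concurrently rather than queued one after another.
        server = socketserver.ThreadingTCPServer(("localhost", 8000), EchoHandler)
        server.serve_forever()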

BMW Munich Plant

October 19th, 2008

A few weeks ago I went on a 2.5-hour tour of the BMW Munich Plant. The tour takes you through the major steps of building 3-series sedans and hatchbacks, including the press shop, body welding, paint shop, engine assembly, final assembly, and testing. While it all looked pretty cool and high-tech, my primary interest was in how BMW organizes this fairly complex process. Below are some of the interesting tidbits I got out of the tour guide (separate tours are given in both German and English).

The press shop uses off-the-shelf presses with the tools (the actual parts that deform the flat metal sheet into a body part) made in-house by BMW. The same set of presses is used to make different parts: the shop makes a number of parts of one kind, then the tools are changed and the same presses make a different part. Quite a bit of the floor space is taken up by the tools, massive pieces of metal about 2 by 2 meters and half a meter thick. It takes 30 minutes to change the tools in a press, and about a week to move the shop to a different factory. Parts that come out of the presses are first inspected manually for any visible defects.

The parts made by the press shop are fed by conveyors to the body welding area. The welding itself is done by robots (made by KUKA), with occasional humans moving parts from a conveyor to a robot’s intake tray. Due to space constraints several robots work simultaneously at any single station; in one station 12 robots work at the same time, which is apparently the industry record. Each robot normally performs several functions, for example lifting and carrying a panel, applying glue, and welding. Synchronizing these robots’ movements so that they don’t hit each other must be an interesting job.

What’s notable is that the body shapes (sedan vs hatchback) are not aggregated into batches. Instead you see a sedan body followed by a hatchback body going through the same welding area, with the robots picking up different parts and welding them in different places. I asked the guide how the robots know which type of body they are working on. Apparently each body is fitted with a transponder that contains the body configuration. When the body arrives at a station this information is read and the appropriate program is selected.

This sounds quite smart and simple, but in reality there are probably quite a few complications down the line. For example, here and later on during the final assembly, different parts need to be delivered to the station depending on the car being built. And since most of the parts are delivered by conveyors, this delivery needs to be scheduled well ahead of time.

After the body is welded it undergoes multi-stage paint work. Here everything is also automated, with robots opening doors, painting inside, and closing them again. The bodies are aggregated into batches based on color, but you still see a sedan following a hatchback, with the robots painting them accordingly. At this stage the bodies do not belong to any particular customer. Instead BMW uses statistics to anticipate how many bodies of a particular shape and color will be ordered.

The engine assembly is mostly manual work and is somewhat disconnected from the rest of the factory in that the engines built at the Munich plant are not put into the cars built there. Instead they are shipped to other plants and engines needed for the 3-series are shipped from other plants to Munich.

In the final assembly a chassis and a body are attached to each other (this is called the “marriage”). After this point the car belongs to a particular customer. The rest of the line is mostly manual work of installing and connecting various bits and pieces.

After the car is assembled it is tested. A person drives the car into a special booth where the wheels can spin freely on rollers. There is a screen in front of the driver with test instructions, and the driver has some sort of device to confirm the completion of various operations. The driver tests basic functions such as the lights and the horn. Then the engine is started and the driver “drives” the car through each gear, with the screen showing which gear and speed he should be at. To me the test seemed surprisingly superficial, lasting only a couple of minutes. At the end of the test the car is loaded onto a train wagon for delivery (the tracks come right into the plant).