
February 07, 2016

Comments


Apparently 47% of US jobs are in danger of being lost, but it's worse for China, India, and elsewhere.

http://www.economist.com/news/finance-and-economics/21689635-jobs-poor-countries-may-be-especially-vulnerable-automation-machine-earning?fsrc=rss|fec

I think this is more of a retrenchment than an end. The problem is datacenter costs are dominated by HVAC costs/engineering more than by hardware costs. Batteries make up a large portion of portable electronics cost, and, as you say, the software hasn't caught up with the hardware. I think if you look at MIPS/Dollar when you include HVAC and battery costs, you're not going to see a lot of slowdown.

Eric,

I think the economic costs of retooling and the decreasing returns on investment will effectively kill off the rest. I was in something of a privileged position on this stuff for years (working in the supercomputer industry), and we've been expecting this... and now the fabs are saying, yeah, this is getting too expensive for us to chase anymore.

So!

At least for the HPC world, modest-sized jobs will move into the cloud and the real HPC work will get absorbed into a handful of labs with ridiculous budgets: $100 million supercomputers will be the norm in about ten years, and the improvement in speed with each iteration will be much less than before.

I'm not really sure how to parse "I think the economic costs of retooling and the decreasing returns on investment will effectively kill off the rest." Sure, MIPS per core and clock speeds may stall for a while, but if MIPS per watt keep rising, and as a result MIPS per dollar of total capital and operating costs keep rising, then the MIPS available to throw at any given problem will continue to go up. That is the real measure of computing's economic power to disrupt.

You may be talking past each other. (Or I may be critically confused, so feel free to school me.)

Will is thinking about the Singularitarians and their fear that the computers will wake up one day and start hunting us down with self-driving cars. Christine, but on a mass scale.

Eric is thinking about the ability of computers to displace humans in any role that can, in theory, be reduced to a series of algorithms. If Bayesian logic can uncover it, then cheap enough computers can do it. (Note that speeding the computers up is only one part of "cheap." Think net present value, here. Any cost reduction in NPV terms will make robots look better relative to people.)
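
To make the NPV point concrete, here's a minimal sketch (the dollar figures and the discount rate are invented purely for illustration) comparing the present value of paying a worker against buying and running an automation system:

    # Hypothetical numbers: compare the net present value of paying a worker
    # versus buying and running an automation system. Any drop in the system's
    # cost shifts the comparison in the machine's favor.

    def npv(cashflows, rate):
        """Discount a list of annual cash outflows back to present value."""
        return sum(cf / (1 + rate) ** year for year, cf in enumerate(cashflows))

    years = 10
    discount_rate = 0.05

    worker_cost = npv([50_000] * years, discount_rate)       # wages every year
    robot_cost = npv([120_000] + [10_000] * (years - 1),     # big upfront buy,
                     discount_rate)                          # small upkeep after

    print(f"Worker NPV cost: ${worker_cost:,.0f}")
    print(f"Robot  NPV cost: ${robot_cost:,.0f}")
    # Cheaper computing lowers the upfront figure, so the robot wins sooner.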

In other words, Skynet is looking increasingly unlikely ... but the collapse of most human labor markets is still barreling down the pike.

I think the two of you may be in agreement. I also suspect that Carlos Yu disagrees with both of you.

But I could very easily be deeply confused and profoundly incorrect about your point of disagreement. Will and Eric, am I making sense?

We may be talking past one another, but I think we actually understand one another.

Eric's argument is that the falling cost per watt of computing will allow total computing power to keep growing through the aggregate approach of many smaller processors.

I'll point to the problem of scaling. Normal apps don't do that well with it. Even HPC apps don't scale well, and reliability at extreme scale goes waaaay down.

Even if you go ridiculously parallel, it's not much use on the desktop or on the phone, and these are the markets that drive chip development. We've hit the point where the market isn't enough to keep Moore's Law going: demand has leveled off even in phones.

There's one way, in hardware, to speed up chips a lot right now: fix the memory architecture. It's an 'easy' win, and yet it is not being done. No one wants to invest in it. I'd argue that, given demand, it won't produce an ROI worth the effort.

That's the essence of what I am also arguing with the fabs: the demand is insufficient to make retooling the fabs worthwhile. Ever fewer fabs will retool, and that will be an even bigger quagmire for Moore's Law than the technological problems.

Economics wins again, Lews Therin!

tl;dr: Chips are not going to get much faster than they are for economic reasons.

Relevant Link:

http://www.nature.com/news/the-chips-are-down-for-moore-s-law-1.19338

The problem of scaling isn't that bad. There are a lot of problems that can be solved on distributed architectures. That's a software problem, and one that's being solved. Google, for example, is currently the fifth-largest consumer of microprocessors in the world and makes up something like 10% of the market. As for fears of the singularity, there's no reason to suppose neural networks and other AI techniques are inherently non-parallelizable (especially since our biological ones clearly are massively parallel).
I'm also skeptical of the argument that demand for mobile device speed has topped out, since power consumption is also a major limiting factor there, given that batteries improve far more slowly than Moore's Law rates. If MIPS per watt keep going up, then the MIPS in our phones are going to go up.

I did a couple of discreet inquiries, and Google does what we call embarrassingly parallel work. This means each part does not really need to communicate with the rest; it just does its work and is happy. A huge farm of processors works really well for that.

While there are independent chunks of AI work, most of it requires a fair amount of communication. It tends toward what we call 'massively parallel,' and that requires a lot of communication overhead.

The former gets a close-to-linear speedup, is easier to manage, and is far more robust to failures: you just rerun the failed job. The only 'real' danger is when a node goes bad and runs through the jobs without actually doing the work. That forces a massive restart of the pipeline. The better pipelines catch this; many out in the wild do not.
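
A toy sketch of that embarrassingly parallel pattern, assuming a made-up process_job function standing in for the real work: each job runs independently, a failed or suspicious result is simply rerun, and a cheap sanity check catches a node that "finished" without doing the work before it poisons the whole pipeline.

    from multiprocessing import Pool

    def process_job(job_id):
        """Stand-in for an independent unit of work (no communication needed)."""
        result = sum(i * i for i in range(job_id * 1000))
        return job_id, result

    def looks_valid(result):
        # Cheap sanity check: a node that skipped the work would return nothing useful.
        return result is not None

    if __name__ == "__main__":
        jobs = list(range(1, 101))
        with Pool(processes=8) as pool:
            results = dict(pool.map(process_job, jobs))

        # Rerun anything that failed or came back suspicious, instead of
        # restarting the whole pipeline.
        retries = [j for j in jobs if not looks_valid(results.get(j))]
        for j in retries:
            results[j] = process_job(j)[1]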

The latter, MP-style scaling, is a serious problem. Most HPC problems do not scale up to even 4k cores. The algorithms just don't work well past a certain point, and the communication protocols (like MPI) don't work great at that scale either. You definitely don't get a linear speedup; after a point you start getting a serious drop-off in performance in almost all cases.
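
A back-of-the-envelope way to see that drop-off is Amdahl's law plus a communication penalty that grows with core count. The 99% parallel fraction and the overhead coefficient below are made up for illustration; the shape of the curve is the point.

    def speedup(cores, parallel_fraction=0.99, comm_cost=1e-5):
        """Amdahl's law with an added linear communication penalty per core."""
        serial = 1 - parallel_fraction
        return 1 / (serial + parallel_fraction / cores + comm_cost * cores)

    for n in (64, 256, 1024, 4096, 16384):
        print(f"{n:6d} cores -> ~{speedup(n):5.1f}x")
    # Speedup climbs, peaks around a few hundred cores in this toy model,
    # then falls as communication costs swamp the remaining gains.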

With regard to the memory architecture: processors, at least on the PC, often sit idle waiting for data to be retrieved from memory. This could be fixed, and could have been fixed a decade ago for significant speed gains, but it has not been done.
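
A crude way to see the "waiting on memory" problem from user space, assuming numpy is available: sum the same array once in order and once in a cache-hostile random order. The arithmetic is identical; the wall-clock gap is memory latency.

    import time
    import numpy as np

    n = 10_000_000
    data = np.random.rand(n)
    random_order = np.random.permutation(n)

    t0 = time.perf_counter()
    sequential_sum = data.sum()                 # streams through memory; prefetch-friendly
    t1 = time.perf_counter()
    scattered_sum = data[random_order].sum()    # same values, cache-hostile gather first
    t2 = time.perf_counter()

    print(f"sequential: {t1 - t0:.3f}s   scattered: {t2 - t1:.3f}s")
    # The extra time in the second case is the CPU stalled on memory accesses,
    # not doing extra arithmetic.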

WRT smartphones: the only way a huge speedup there is worthwhile is if we expect more from the phones, i.e., we want them to be PCs. Or Macs. That's possible and even a good thing, but I haven't seen real movement in that direction. Smartphone demand is tapering off, and unless there's a significant breakthrough, there'll be little interest in investing much.

Think of it as being like steam engines. We can make some damned impressive ones, ones that might even be as efficient as internal combustion or, if it can be believed, more so. Yet there is little to no interest in investing.

The 737 argument that even Intel is touting fits really well.

We're at the top of the J curve or very close.

Training a neural net is MP. Getting Google vision to recognize cats is probably not going to scale up to even thousands of nodes any time soon. Using a trained neural net to recognize cats in a set of photos is EP: each photo can be run on a different cluster. Substitute "your face" for "cat" and "video feed" for "photo," and I think we have an EP problem that will give Noel AI nightmares for a week.
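
A sketch of why inference is EP (classify_photo here is a hypothetical stand-in for whatever trained model you'd actually call): each image is scored independently, so a bigger pile of photos just means more workers, with essentially no communication between them.

    from concurrent.futures import ProcessPoolExecutor

    def classify_photo(path):
        """Placeholder for running a trained model on one image; no inter-node
        communication is needed, which is what makes this embarrassingly parallel."""
        return path, hash(path) % 2 == 0   # pretend: True means 'cat found'

    if __name__ == "__main__":
        photos = [f"photo_{i:05d}.jpg" for i in range(10_000)]
        with ProcessPoolExecutor(max_workers=16) as pool:
            hits = [p for p, is_cat in pool.map(classify_photo, photos, chunksize=256)
                    if is_cat]
        print(f"{len(hits)} photos flagged")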

Even in computational science (my former field), there's a surprising amount of EP work. I worked in combustion chemistry, so we'll use that as an example. If you want to do a CFD simulation of a combusting flow, you need the potential energy surfaces of all the reactions. Each point on the PE surface can be calculated independently of the rest, although each point is probably an MP problem best solved on hundreds of nodes (you could use information from one point to speed the calculation of neighboring points, but if scaling that up becomes challenging, you can still get linear speedups on the brute-force EP solution). Then getting reaction rates for each reaction under different temperature/pressure regimes is an EP ensemble of MP (or, really, single-processor, or EP, depending on your method) calculations. The actual CFD calculation is pretty amenable to scaling if you use a simplified radiation model, and even if not, it's probably not the limiting step compared to getting the reaction rates.

WRT memory architecture, it hasn't been worthwhile because bigger gains have been available by chasing Moore's Law-type improvements. If those are running out (as we pretty much agree), then memory improvements become more interesting, as long as the market for MIPS doesn't shrink. I'm arguing that there are enough EP problems out there, and enough MP problems that aren't being addressed because the software hasn't been written yet, that the improvements we can still make in computing technology that improve MIPS/$ will still be profitable (e.g. memory speeds or power consumption), even if improving transistors/cm^2 or clock speeds is no longer profitable. And as long as MIPS/$ improve, we can expect the demand for computing power to continue to increase.

As for needing more performance in smart phones, I refer you back to the post on Virtual Reality for one reason we might.

WRT steam engines: the steam turbines used in coal power plants are 42% efficient, compared to a maximum Carnot efficiency of about 63%. (a) They're being built; the investment was made. (b) There isn't a lot of room for improvement there; they're pretty close to, or beyond, where we are in the transistors/cm^2 race. If you just mean steam engines for cars, then no, they're not being invested in, because the money is chasing easier/more profitable improvements (hybrids, electric cars, HCCI engines).
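
For reference, the ~63% ceiling follows from the Carnot limit, 1 - T_cold/T_hot. The temperatures below are typical ballpark figures for a coal plant, not data from any specific plant:

    # Carnot limit: efficiency can never exceed 1 - T_cold / T_hot (temperatures in kelvin).
    T_hot = 811    # ~538 C main steam temperature, a common ballpark for coal plants
    T_cold = 300   # ~27 C cooling-water / ambient sink

    carnot_limit = 1 - T_cold / T_hot
    actual = 0.42
    print(f"Carnot limit: {carnot_limit:.0%}, actual: {actual:.0%}, "
          f"fraction of the theoretical max already captured: {actual / carnot_limit:.0%}")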

