Morty--I'm sure I don't know Dawkins as well as you do--is The Selfish Gene the best place to start getting better acquainted?
And if you are interested in evolution and in Darwin's theories and views on it, Dawkins's The Greatest Show on Earth has been called the best popular presentation, although Jerry Coyne's Why Evolution is True (he keeps it as simple as his title) is very good and very quick (it's about 130 pages long).
It's called exponential growth for a reason.
I'm skeptical that computing speed's exponential growth can continue indefinitely. As with most examples of exponential growth, there are physical limits -- Planck's constant, atomic scales, the speed of light, etc. -- that will take effect at some point. Quantum computing looks to be only a piece of the answer.
It may be possible to use a black hole as a data storage and/or computing device, if a practical mechanism for extraction of contained information can be found. Such extraction may in principle be possible (Stephen Hawking's proposed resolution to the black hole information paradox). This would achieve storage density exactly equal to the Bekenstein Bound. Professor Seth Lloyd calculated the computational abilities of an "ultimate laptop" formed by compressing a kilogram of matter into a black hole of radius 1.485 × 10⁻²⁷ meters, concluding that it would only last about 10⁻¹⁹ seconds before evaporating due to Hawking radiation, but that during this brief time it could compute at a rate of about 5 × 10⁵⁰ operations per second, ultimately performing about 10³² operations on 10¹⁶ bits. Lloyd notes that "Interestingly, although this hypothetical computation is performed at ultra-high densities and speeds, the total number of bits available to be processed is not far from the number available to current computers operating in more familiar surroundings."
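As a rough check on those figures, here is a minimal Python sketch (my own, not Lloyd's code) of the Margolus-Levitin speed limit his calculation rests on, ops/s ≈ 2E/(πħ) with E = mc² for one kilogram; the constants are standard values and the result lands close to the quoted 5 × 10⁵⁰.

    import math

    hbar = 1.0546e-34   # reduced Planck constant, J*s
    c = 2.998e8         # speed of light, m/s
    m = 1.0             # one kilogram "ultimate laptop"

    E = m * c**2                            # total energy available, joules
    ops_per_sec = 2 * E / (math.pi * hbar)  # Margolus-Levitin bound on operations per second

    print(f"{ops_per_sec:.2e} ops/s")       # ~5.4e50, in line with the ~5 x 10^50 quoted above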
Well, first, I think some more operational definitions are in order. Let’s assume “human-level AI” means “capable of non-emotional reasoning and problem solving at the level of the average human in realtime”. Basically, what we currently understand as the functionality of the neocortex (Strong AI). I think the “in realtime” is important, because if we can run an AGI at 1/1000th the speed of a human brain (still quite a feat), we’re quite a few years from being able to interact with it.
Quite a few people picked 2030 [as the year of human-level, Strong AI being developed], but I didn’t see any real reasoning behind the number other than increasing computing speeds. L Zoel touched on simulation; this seems like a good baseline for a pessimistic projection. Let’s use the Blue Brain project as our benchmark, since it’s the farthest along in true neuronal simulation (with interconnects and not simple point neurons).
Blue Brain can simulate a rat-level neocortical column (~10,000 neurons) in realtime on an IBM Blue Gene/L supercomputer (36 TFLOPS). These are advanced neuronal simulations at the cellular level, including interconnects between neurons. A human neocortical column has ~50,000 neurons (this varies, of course). Assuming complexity scales with the square of the neuron count (due to interconnects), 25x more computational power is required to simulate one human neocortical column: roughly 1 petaflop, in the range of the fastest supercomputers today.
The human neocortex has between 2 and 5 million neocortical columns. This means a zettaflop computer (1 million times more powerful than today’s fastest supercomputers) is required to run the Blue Brain simulation, in its current state, at the scale of a human neocortex.
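A quick back-of-envelope in Python reproduces that estimate (a sketch only; the 25x interconnect factor and the 2-5 million column count are the assumptions stated above, not measured requirements):

    rat_column_flops = 36e12               # Blue Gene/L running one ~10,000-neuron rat column in realtime
    neuron_ratio = 50_000 / 10_000         # a human column has ~5x the neurons
    complexity_factor = neuron_ratio ** 2  # assumption from the post: cost scales with the square (interconnects)

    human_column_flops = rat_column_flops * complexity_factor
    print(f"one human column: {human_column_flops:.1e} FLOPS")   # ~9e14, roughly a petaflop

    for columns in (2e6, 5e6):             # assumed 2-5 million columns in a human neocortex
        print(f"{columns:.0e} columns: {human_column_flops * columns:.1e} FLOPS")
    # ~1.8e21 to ~4.5e21 FLOPS, i.e. zettaflop scale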
Now, this is incredibly inefficient. We aren’t actually writing intelligence algorithms, just simulating the brain down to the cellular level. A fellow from Sandia Labs predicts that with a zettaflop computer we could model the entire world’s weather patterns at a resolution of under 100m for 2 weeks. Clearly this is far beyond the scope of what one human brain is capable of, yet the hardware required to do both is identical. I think it speaks to the inefficiency of the simulation, and the potential for simplification of an AI model.
But even with this pessimistic outcome of AI, if the colloquial version of Moore’s Law holds, by 2030 we’ll have the processing power to do this. Any other advances in actual AI algorithms (Jeff Hawkins’s NuPIC software excels at the pattern recognition many here have mentioned; I think his HTM theory holds much promise) could speed things along. I think 2030-2050 is a sure thing if computers keep pace, and it looks to me like they will.
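For what it’s worth, the timeline arithmetic checks out under simple doubling assumptions; a small sketch (the starting figure and doubling periods are assumptions, not forecasts):

    import math

    start_year, start_flops = 2013, 2e16   # assumed: roughly the fastest supercomputers circa 2013
    target_flops = 2e21                    # zettaflop scale, from the estimate above

    doublings = math.log2(target_flops / start_flops)
    for doubling_years in (1.0, 1.5, 2.0):
        print(f"doubling every {doubling_years} yr -> zettaflop around {start_year + doublings * doubling_years:.0f}")
    # ~2030, ~2038, ~2046: consistent with a 2030-2050 window if computers keep pace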
Shrinking MOSFETs down to 16nm by 2016, 3D chip stacking, optical chip interconnects, self-assembling CNTFETs, graphene clock multipliers: these are all things being experimented with and tested now that don’t require any wildcard technologies (like quantum computing, single-photon transistors, molecular computing, etc.).
2030-2050 has my vote… [for the development of Strong AI at the level of the human brain].
Biology, biochemistry, and genetics, on the other hand, are just beginning their exponential explosion (e.g., the cost of DNA sequencing is currently falling at a supra-Moore's-law rate).
Btw, I've assumed the device McCoy is waving is a portable MRI (among other things), with real-time, high definition reading and diagnosis, natch. That means he's packing zettaflop power at least into a salt shaker.
Though in 2013 what we have is presumably generic skin or more probably a skin-like substance, and not one that tailors itself to mimic your own skin's DNA the way it surely would in 2450ish.
-MRI and PET and other non-invasive imaging, though the machines are much larger.
I assume you could regrow organs quickly in TOS (massive repairs were done from time to time), though I don't remember specific examples.
In Star Trek IV they were on present-day Earth and McCoy gave someone a pill that regrew their kidney(?).
In November 2011, a group of MIT researchers created the first computer chip that mimics the analog, ion-based communication in a synapse between two neurons using 400 transistors and standard CMOS manufacturing techniques.
In June 2012, spintronics researchers at Purdue presented a paper on a design for a neuromorphic chip using lateral spin valves and memristors. They argue that the architecture they have designed works in a similar way to neurons and can therefore be used to test various ways of reproducing the brain’s processing ability. In addition, such chips are significantly more energy efficient than conventional chips.
Within the brain, neurons all have electrical voices, each singing out in harmony with millions of others, a complex choir of information processing from which emerges the crown jewel of the human being – the conscious mind.
As described in The Blue Brain Way: Creating ‘singing’ neurons, it’s necessary to first create the voices of neurons – i.e. create electrical models that are representative of the full diversity of electrical behaviors exhibited by neurons. These are the e-types. Secondly, it is necessary to transplant these voices into the correct ‘voice boxes’ or neuron morphologies to create me-types (morpho-electrical types).
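To make that pairing concrete, here is a toy sketch (the names and structure are illustrative only, not the actual Blue Brain code): each morphology "voice box" is crossed with the e-type "voices" it can plausibly host to yield me-types.

    from dataclasses import dataclass

    @dataclass
    class EType:                 # an electrical behavior model (a "voice")
        name: str

    @dataclass
    class Morphology:            # a reconstructed neuron shape (a "voice box")
        name: str
        compatible_etypes: set

    def build_me_types(etypes, morphologies):
        """Pair every morphology with each e-type it can host -> morpho-electrical types."""
        return [(m.name, e.name)
                for m in morphologies
                for e in etypes
                if e.name in m.compatible_etypes]

    etypes = [EType("continuous adapting"), EType("burst accommodating")]
    morphs = [Morphology("L5 thick-tufted pyramidal", {"continuous adapting"})]
    print(build_me_types(etypes, morphs))
    # [('L5 thick-tufted pyramidal', 'continuous adapting')]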
We have to do that if we're going to aim at comprehensive theories of Being, but it also means we'll occasionally look like fools. The alternative is to theorize only narrowly, from subjects we've mastered.
Also, the sequester can't be unpopular since it hasn't had any effect yet; what's unpopular is the scare tactics being announced by Obama and reported uncritically by the liberal media, in which cutting an infinitesimal portion of the federal budget will somehow cut every popular program but nothing unpopular.
Why are we limited to only theorizing? We have at our disposal the scientific method, right? If Kurzweil wants to be taken seriously, he can construct a model and experiment to validate his theories. Then his critics can review and duplicate, if possible, his findings. Short of that, it's all just mental masturbation.
In what way does thinking involve processing a stimulus and categorizing it? When I am thinking about London while in Miami I am not recognizing any presented stimulus as London—since I am not perceiving London with my senses. There is no perceptual recognition going on at all in thinking about an absent object. So pattern recognition cannot be the essential nature of thought. This point seems totally obvious and quite devastating, yet Kurzweil has nothing to say about it, not even acknowledging the problem.