Maupertuis's Principle (of Least Action) is not quite the same as Darwin's Principle of Random Variation and Selective Retention (i.e., automatic design based on the post hoc adaptive advantages -- for survival and reproduction -- of natural developmental or random variations). But the two would be ominously close if it weren't for the (subsequent) discovery of evolution's mechanism: Mendelian genetics and eventually the DNA double helix.
That said, there nevertheless is a big difference between Biology and Physics: Physics is studying the basic laws of the universe, whereas Biology is mostly very local reverse-engineering: Figuring out how (naturally designed and selected, via DNA variation/retention) devices (organs, organisms, biological systems) work, by reverse-engineering them. This is exactly the same as forward engineering, which applies the laws of physics and the principles of engineering in order to design and build systems useful to Man: Biology simply takes already-built ones and tries to figure out what laws of physics and principles of engineering underlie them and make them work.
In contrast, Physics is not, I think, usefully thought of as merely reverse-engineering designed systems (e.g., the universe or the atom). The laws of physics precede and underlie all the possible systems that can be designed and built by either engineers, or the Blind Watchmaker.
In reality the "semantic web" is, and can only ever be, a "syntactic web". Syntax is merely form -- the shape of arbitrary objects called symbols, within a formal notational system adopted by an agreed and shared convention. Computation is the rule-based manipulation of those symbols, with the rules and manipulations ("algorithms") based purely and mechanically on the shapes of the symbols, not their meaning -- even though most of the individual symbols, as well as the combinations of symbols, are systematically interpretable (by human minds) as having meaning.
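The point that computation trades only in the shapes of symbols, never their meanings, can be made concrete with a toy string-rewriting system. This is my own illustrative sketch, not anything in the text; the rule set and names are invented for the example:

```python
# A purely syntactic "computer": it rewrites strings of symbols by rules
# that refer only to the symbols' shapes, never to what they mean.

def rewrite(tokens, rules):
    """Repeatedly apply the first matching rule, matching by shape alone."""
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            n = len(lhs)
            for i in range(len(tokens) - n + 1):
                if tokens[i:i + n] == lhs:
                    tokens = tokens[:i] + rhs + tokens[i + n:]
                    changed = True
                    break
            if changed:
                break
    return tokens

# One rule: delete the "+" shape. Applied to unary numerals, this rule
# *is* addition -- but only to us, the interpreters. The machine just
# shuffles shapes.
RULES = [(["+"], [])]
result = "".join(rewrite(list("||+|||"), RULES))
print(result)  # "|||||" -- which we read as 2 + 3 = 5
```

The machine never "knows" it is adding; the arithmetic interpretation is imposed entirely from outside, by the minds reading the strokes -- which is exactly the sense in which any symbol system, the web included, is syntactic rather than semantic.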
Semantics, in contrast, concerns the meanings of the symbols, not their shapes or the syntactic manipulation of their shapes. The "symbol grounding problem" is the problem of how symbols get their meanings, i.e., their semantics, and it is not yet solved. It is clear that symbols in the brain are grounded, but we do not yet know how. Grounding is likely related to our sensorimotor capacity (how we are able to perceive, recognise and manipulate objects and states), but so far that looks as if it will only connect symbols to their referents, not yet to their meanings. Frege's notion of "sense" does not capture meaning either, because it too is just syntactic: it consists of syntactic rules. Nor does formal model-theoretic semantics, which likewise merely finds another syntactic object or system that follows the same rules as the syntactic object or system whose meaning we are seeking.
So whereas sensorimotor grounding -- as in a robot that can pass the Turing Test -- does break out of the syntactic circle, it does not really get us to meaning (though it may be as far as cognitive science will ever be able to get us, because meaning may be related to the perhaps insoluble problem of consciousness).
Where does that leave the "semantic web"? As merely an ungrounded syntactic network. Like many useful symbol systems and artificial "neural networks", the network of labels, links and connectivity of the web can compute useful answers for us, has interesting, systematic correlates (e.g., as in latent "semantic" analysis), and can be given a systematic semantic interpretation (by our minds). But it remains merely a syntactic web, not a semantic one.
Kevin Kelly thinks the web is not only what it really is -- which is a huge peripheral memory and reference source, along with usage stats -- but also a kind of independent thinking brain.
It's not, even though it has connections, as does a neural net (which is likewise not a thinking brain).
KK is right that googling is replacing the consultation of our own onboard memories, but that is par for the course, ever since our species first began using external memories to increase our total information storage and processing capacity: Mimesis, language and writing were earlier and more dramatic precursors. (We're talking heads who already feel as helpless without our interlocutors, tapes and texts today as KK says we all will -- and some already do today -- without the web.)
And KK misses the fact that the brain is not, in fact, just a syntactic machine, the way the web is: There is no "semantic" web, just an increasingly rich "syntactic web".
Nor (in my opinion) is the web's most revolutionary potential in its role of peripheral mega-memory and hyper-encyclopedia/almanac. It is not even -- though it comes closer -- in its interactive Hyde-Park role in blogs and wikipedias. That's just an extension of Call-In Chat Shows, Reality TV, acting-out, and everyone-wants-to-be-a-star. We've all had the capacity to talk for hundreds of thousands of years, but most of us have not found very much worth saying -- or worth hearing by most others. The nature of the Gaussian distribution is such that that is bound to remain a demographic rarity, even if the collective baseline rises -- which I am not at all sure it's doing! We just re-scale...
No, I think the real cognitive-killer-app of the web is the quote/commentary capability, but done openly -- "skywriting". At the vast bottom level this will just be the Hyde-Park "you know what's wrong with the world dontcha?" pub-wisdom of the masses, Gaussian noise. But in some more selective, rigorous and answerable reaches of cyberspace -- corresponding roughly to what refereed, published science and scholarship used to be in the Gutenberg era -- remarkable PostGutenberg efflorescences are waiting to happen: waiting only for the right demography to converge there, along with its writings, all Open Access, so the skywriting can begin in earnest.