
I confess, I'm scared of the next generation of supercomputers | TechRadar

  

Category:  Health, Science & Technology

Via:  tig  •  4 years ago  •  35 comments

By:   Joel Khalili (TechRadar)

Exascale computing has been in the works for years - and is now on the immediate horizon.

Imagine a single computer that has the processing power of 415,000 of the best modern desktop computers working together.

This machine can perform 415.5 petaFLOPS.  (A FLOP is a floating point operation, such as dividing two floating point numbers.)  Peta (quadrillion) is the next magnitude up from tera (trillion), which follows giga (billion), which follows mega (million).   Within a year, a machine capable of exaFLOPS is expected:  1 exaFLOP is 1,000 petaFLOPS, or 1 quintillion FLOPS.   To put this in perspective, 1 exaFLOP is this many floating point operations in a single second:

1,000,000,000,000,000,000 floating point operations in a single second.

One of the best modern desktop processors, the Intel i9-7980XE, can handle about 1 teraFLOPS - so imagine, in effect, a million i9 CPUs in one machine.

It has been estimated that the Earth holds roughly 7-8 quintillion grains of sand.   An exaFLOP machine could execute a floating point operation for every grain of sand on the planet in 7-8 seconds.
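The grain-of-sand comparison is easy to check in a couple of lines of Python (a back-of-envelope sketch using the article's rough 7-8 quintillion estimate, taking the midpoint):

```python
EXAFLOPS = 10 ** 18               # floating point operations per second
GRAINS_OF_SAND = 7.5 * 10 ** 18   # rough midpoint of the 7-8 quintillion estimate

# One operation per grain: total grains divided by operations per second.
seconds = GRAINS_OF_SAND / EXAFLOPS
print(seconds)  # 7.5 -> inside the article's 7-8 second window
```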

Science fiction coming soon so better brush up on the names of the ridiculously large numerical magnitudes.


S E E D E D   C O N T E N T



Earlier this year, a Japanese supercomputer built on Arm-based Fujitsu A64FX processors snatched the crown of world's fastest machine, blowing incumbent leader IBM Summit out of the water.

Fugaku, as the machine is known, achieved 415.5 petaFLOPS on the popular High Performance Linpack (HPL) benchmark - almost three times the score of the IBM machine (148.5 petaFLOPS).

It also topped the rankings for Graph 500, HPL-AI and HPCG workloads - a feat never before achieved in the world of high performance computing (HPC).


Modern supercomputers are now edging ever-closer to the landmark figure of one exaFLOPS (equal to 1,000 petaFLOPS), commonly described as the exascale barrier. In fact, Fugaku itself can already achieve one exaFLOPS, but only in lower precision modes.
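Fugaku's exaFLOPS-in-lower-precision result reflects a general trade-off: narrower floating point formats are cheaper to store and compute, but less accurate. A small stdlib-only Python sketch (my own illustration, not from the article) shows what IEEE half precision (float16, the kind of format used in mixed-precision runs) gives up:

```python
import struct

# Round-tripping 1/3 through IEEE half precision. A float16 value
# occupies 2 bytes versus 8 for float64, so hardware can stream four
# times as many values per unit of bandwidth - at the cost of keeping
# only ~3 significant decimal digits.
x = 1 / 3
half = struct.unpack('e', struct.pack('e', x))[0]  # value rounded to float16

print(x)     # 0.3333333333333333
print(half)  # 0.333251953125
```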

The consensus among the experts we spoke to is that a single machine will breach the exascale barrier within the next 6 - 24 months, unlocking a wealth of possibilities in the fields of medical research, climate forecasting, cybersecurity and more.

But what is an exaFLOPS? And what will it mean to break the exascale milestone, pursued doggedly for more than a decade?

The exascale barrier


To understand what it means to achieve exascale computing, it's important to first understand what is meant by FLOPS, which stands for floating point operations per second.

A floating point operation is any mathematical calculation (i.e. addition, subtraction, multiplication or division) that involves a number containing a decimal point (e.g. 3.0 - a floating point number), as opposed to a whole number (e.g. 3 - an integer). Calculations involving decimals are typically more complex and therefore take longer to solve.

An exascale computer can perform 10^18 (one quintillion, or 1,000,000,000,000,000,000) of these mathematical calculations every second.

For context, to equal the number of calculations an exascale computer can process in a single second, an individual would have to perform one sum every second for 31,688,765,000 years.
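That figure is straightforward to reproduce. The sketch below assumes one calculation per second and a tropical year of 31,556,926 seconds:

```python
EXA = 10 ** 18                 # calculations an exascale machine performs per second
SECONDS_PER_YEAR = 31_556_926  # one tropical year, in seconds

# How long a person doing one sum per second would need to match
# a single second of exascale computation.
years = EXA / SECONDS_PER_YEAR
print(f"{years:,.0f} years")   # ≈ 31,688,765,000 years
```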

The PC I'm using right now, meanwhile, is able to reach 147 billion FLOPS (or 0.00000014723 exaFLOPS), outperforming the fastest supercomputer of 1993, the Intel Paragon (143.4 billion FLOPS).

This both underscores how far computing has come in the last three decades and puts into perspective the extreme performance levels attained by the leading supercomputers today.
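You can estimate a machine's effective FLOPS yourself by timing a known amount of floating point work. The pure-Python sketch below (my own illustration) badly understates the hardware's true capability because of interpreter overhead - real measurements use tuned kernels like Linpack - but it shows the idea:

```python
import time

# A crude FLOPS estimate: time a run of multiply-adds.
N = 1_000_000
x = 1.0000001
acc = 0.0

start = time.perf_counter()
for _ in range(N):
    acc += x * x          # two floating point operations per iteration
elapsed = time.perf_counter() - start

flops = 2 * N / elapsed
print(f"~{flops / 1e6:.1f} MFLOPS (interpreter-bound, far below the hardware peak)")
```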

The key to building a machine capable of reaching one exaFLOPS is optimization at the processing, storage and software layers.

The hardware must be small and powerful enough to pack together and reach the necessary speeds, the storage capacious and fast enough to serve up the data and the software scalable and programmable enough to make full use of the hardware.

For example, there comes a point at which adding more processors to a supercomputer no longer increases its speed, because the application is not sufficiently optimized to exploit them. The only way governments and private businesses will realize a full return on HPC hardware investment is through an equivalent investment in software.
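This diminishing-returns effect can be made precise with Amdahl's law (my own illustration, not a formula from the article): if only a fraction p of an application parallelizes, the speedup from n processors is 1 / ((1 - p) + p/n), which is capped at 1/(1 - p) no matter how many processors are added.

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Amdahl's law: speedup from n processors when a fraction p
    of the work is parallelizable."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallelized, speedup plateaus
# just below the 1 / (1 - 0.95) = 20x ceiling.
for n in (10, 1_000, 1_000_000):
    print(n, round(amdahl_speedup(0.95, n), 2))
```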

Organizations such as the Exascale Computing Project (ECP) and the ExCALIBUR programme are working to solve precisely this problem. Those involved claim a renewed focus on algorithm and application development is required in order to harness the full power and scope of exascale.

The challenge facing the HPC industry is to strike this delicate balance between software and hardware in an energy-efficient manner, while avoiding an impractically low mean time between failures (MTBF) - the average time a system runs before breaking down under strain.

"15 years ago as we started the discussion on exascale, we hypothesized that it would need to be done in 20 megawatts (MW); later that was changed to 40 MW. With Fugaku, we see that we are about halfway to a 64-bit exaFLOPS at the 40 MW power envelope, which shows that an exaFLOPS is in reach today," explained Brent Gorda, Senior Director of HPC at UK-based chip designer Arm.

"We could hit an exaFLOPS now with sufficient funding to build and run a system. [But] the size of the system is likely to be such that MTBF is measured in single digit number-of-days based on today's technologies and the number of components necessary to reach these levels of performance."

What other factors are at play?


When it comes to building a machine capable of breaching the exascale barrier, there are a number of other factors at play, beyond technological feasibility. An exascale computer can only come into being once an equilibrium has been reached at the intersection of technology, economics and politics.

"One could in theory build an exascale system today by packing in enough CPUs, GPUs and DPUs. But what about economic viability?" said Gilad Shainer of NVIDIA Mellanox, the firm behind the InfiniBand technology (the interconnect fabric that links the various hardware components) found in seven of the ten fastest supercomputers.

"Improvements in computing technologies, silicon processing, more efficient use of power and so on all help to increase efficiency and make exascale computing an economic objective - as opposed to a sort of sporting achievement."

According to Paul Calleja, who heads up computing research at the University of Cambridge and is working with Dell on the Open Exascale Lab, Fugaku is an excellent example of what is theoretically possible today, but is also impractical by virtually any other metric.

"If you look back at Japanese supercomputers, historically there's only ever been one of them made. They have beautifully exquisite architectures, but they're so stupidly expensive and proprietary that no one else could afford one," he told TechRadar Pro.

"[Japanese organizations] like these really large technology demonstrators, which are very useful in industry because it shows the direction of travel and pushes advancements, but those kinds of advancements are very expensive and not sustainable, scalable or replicable."

So, in this sense, there are two separate exascale landmarks; the theoretical barrier, which will likely be met first by a machine of Fugaku's ilk (a "technological demonstrator"), and the practical barrier, which will see exascale computing deployed en masse.

Geopolitical factors will also play a role in how quickly the exascale barrier is breached. Researchers and engineers might focus exclusively on the technological feat, but the institutions and governments funding HPC research are likely motivated by different considerations.

"Exascale computing is not just about reaching theoretical targets, it is about creating the ability to tackle problems that have been previously intractable," said Andy Grant, Vice President HPC & Big Data at IT services firm Atos, influential in the fields of HPC and quantum computing.

"Those that are developing exascale technologies are not doing it merely to have the fastest supercomputer in the world, but to maintain international competitiveness, security and defence."

"In Japan, their new machine is roughly 2.8x more powerful than the now-second place system. In broad terms, that will enable Japanese researchers to address problems that are 2.8x more complex. In the context of international competitiveness, that creates a significant advantage."

In years gone by, rival nations fought it out in the trenches or competed to see who could place the first human on the moon. But computing may well become the frontier at which the next arms race takes place; supremacy in the field of HPC might prove just as politically important as military strength.

What can we do with exascale computing?


Once exascale computers become an established resource - available for businesses, scientists and academics to draw upon - a wealth of possibilities will be unlocked across a wide variety of sectors.

HPC could prove revelatory in the fields of clinical medicine and genomics, for example, which require vast amounts of compute power to conduct molecular modelling, simulate interactions between compounds and sequence genomes.

In fact, IBM Summit and a host of other modern supercomputers are being used to identify chemical compounds that could contribute to the fight against coronavirus. The Covid-19 High Performance Computing Consortium assembled 16 supercomputers, accounting for an aggregate of 330 petaFLOPS - but imagine how much more quickly research could be conducted using a fleet of machines capable of reaching 1,000 petaFLOPS on their own.

Artificial intelligence, meanwhile, is another cross-disciplinary domain that will be transformed with the arrival of exascale computing. The ability to analyze ever-larger datasets will improve the ability of AI models to make accurate forecasts (contingent on the quality of data fed into the system) that could be applied to virtually any industry, from cybersecurity to e-commerce, manufacturing, logistics, banking, education and many more.

As explained by Rashid Mansoor, CTO at UK supercomputing startup Hadean, the value of supercomputing lies in the ability to make an accurate projection (of any variety).

"The primary purpose of a supercomputer is to compute some real-world phenomenon to provide a prediction. The prediction could be the way proteins interact, the way a disease spreads through the population, how air moves over an aerofoil or electromagnetic fields interact with a spacecraft during re-entry," he told TechRadar Pro.

"Raw performance such as the HPL benchmark simply indicates that we can model bigger and more complex systems to a greater degree of accuracy. One thing that the history of computing has shown us is that the demand for computing power is insatiable."

Other commonly cited areas that will benefit significantly from the arrival of exascale include brain mapping, weather and climate forecasting, product design and astronomy, but it's also likely that brand new use cases will emerge as well.

"The desired workloads and the technology to perform them form a virtuous circle. The faster and more performant the computers, the more complex problems we can solve and the faster the discovery of new problems," explained Shainer.

"What we can be sure of is that we will see the continuous needs or ever growing demands for more performance capabilities in order to solve the unsolvable. Once this is solved, we will find the new unsolvable."

What about zettascale?


By all accounts, the exascale barrier will likely fall within the next two years, but the HPC industry will then turn its attention to the next objective, because the work is never done.

Some might point to quantum computers, which approach problem solving in an entirely different way to classical machines (exploiting quantum effects such as superposition and entanglement), allowing for far greater scale on certain workloads. However, there are also problems to which quantum computing cannot usefully be applied.

"Mid-term (10 year) prospects for quantum computing are starting to shape up, as are other technologies. These will be more specialized where a quantum computer will very likely show up as an application accelerator for problems that relate to logistics first. They won't completely replace the need for current architectures for IT/data processing," explained Gorda.

As Mansoor puts it, "on certain problems even a small quantum computer can be exponentially faster than all of the classical computing power on earth combined. Yet on other problems, a quantum computer could be slower than a pocket calculator."

The next logical landmark for traditional computing, then, would be one zettaFLOPS, equal to 1,000 exaFLOPS or 1,000,000 petaFLOPS.

Chinese researchers predicted in 2018 that the first zettascale system will come online in 2035, paving the way for "new computing paradigms". The paper itself reads like science fiction, at least for the layman:

"To realize these metrics, micro-architectures will evolve to consist of more diverse and heterogeneous components. Many forms of specialized accelerators are likely to co-exist to boost HPC in a joint effort. Enabled by new interconnect materials such as photonic crystal, fully optical interconnecting systems may come into use."

Assuming one exaFLOPS is reached by 2022, 14 years will have elapsed between the creation of the first petascale and first exascale systems. The first terascale machine, meanwhile, was constructed in 1996, 12 years before the petascale barrier was breached.
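The pattern the article describes - terascale in 1996, petascale 12 years later, exascale roughly 14 years after that - can be extrapolated naively in a few lines (a sketch of the reasoning, not a serious forecast):

```python
# Supercomputing milestones cited in the article.
milestones = {"tera": 1996, "peta": 2008, "exa": 2022}

# Gaps between successive milestones: 12 and 14 years.
years = list(milestones.values())
gaps = [b - a for a, b in zip(years, years[1:])]
avg_gap = sum(gaps) / len(gaps)   # 13 years

# Projecting one more gap forward lands on the Chinese
# researchers' 2035 estimate for zettascale.
print(milestones["exa"] + avg_gap)  # 2035.0
```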

If this pattern were to continue, the Chinese researchers' estimate would look relatively sensible, but there are firm question marks over the validity of zettascale projections.

While experts are confident in their predicted exascale timelines, none would venture a guess at when zettascale might arrive without prefacing their estimate with a long list of caveats.

"Is that an interesting subject? Because to be honest with you, it's so not obtainable. To imagine how we could go 1000x beyond [one exaFLOPS] is not a conversation anyone could have, unless they're just making it up," said Calleja, asked about the concept of zettascale.

Others were more willing to theorize, but equally reticent to guess at a specific timeline. According to Grant, the way zettascale machines process information will be unlike any supercomputer in existence today.

"[Zettascale systems] will be data-centric, meaning components will move to the data rather than the other way around, as data volumes are likely to be so large that moving data will be too expensive. Regardless, predicting what they might look like is all guesswork for now," he said.

It is also possible that the decentralized model might be the fastest route to achieving zettascale, with millions of less powerful devices working in unison to form a collective supercomputer more powerful than any single machine (as put into practice by distributed projects such as SETI@home).

As noted by Saurabh Vij, CEO of distributed supercomputing firm Q Blocks, decentralized systems address a number of problems facing the HPC industry today, namely surrounding building and maintenance costs. They are also accessible to a much wider range of users and therefore democratize access to supercomputing resources in a way that is not otherwise possible.

"There are benefits to a centralized architecture, but the cost and maintenance barrier overshadows them. [Centralized systems] also alienate a large base of customer groups that could benefit," he said.

"We think a better way is to connect distributed nodes together in a reliable and secure manner. It wouldn't be too aggressive to say that, 5 years from now, your smartphone could be part of a giant distributed supercomputer, making money for you while you sleep by solving computational problems for industry," he added.

However, incentivizing network nodes to remain active for a long period is challenging and a high rate of turnover can lead to reliability issues. Network latency and capacity problems would also need to be addressed before distributed supercomputing can rise to prominence.

Ultimately, the difficulty in making firm predictions about zettascale lies in the massive chasm that separates present day workloads and HPC architectures from those that might exist in the future. From a contemporary perspective, it's fruitless to imagine what might be made possible by a computer so powerful.

We might imagine zettascale machines will be used to process workloads similar to those tackled by modern supercomputers, only more quickly. But it's possible - even likely - the arrival of zettascale computing will open doors that do not and cannot exist today, so extraordinary is the leap.

Replicating the human brain


In a future in which computers are 2,000+ times as fast as the most powerful machine today, philosophical and ethical debates surrounding the intelligence of man versus machine are bound to be played out in greater detail - and with greater consequence.

It is impossible to directly compare the workings of a human brain with that of a computer - i.e. to assign a FLOPS value to the human mind. However, it is not unreasonable to ask how many FLOPS must be achieved before a machine reaches a level of performance that might be loosely comparable to the brain.

Back in 2013, scientists used the K supercomputer to conduct a neuronal network simulation using open source simulation software NEST. The team simulated a network made up of 1.73 billion nerve cells connected by 10.4 trillion synapses.

Though enormous, the simulation represented only 1% of the human brain's neuronal network, and took 40 minutes to replicate one second's worth of neuronal network activity.

However, the K computer reached a maximum computational power of only 10 petaFLOPS. A basic extrapolation (ignoring inevitable complexities), then, would suggest Fugaku could simulate circa 40% of the human brain, while a zettascale computer would be capable of performing a full simulation many times over.
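The article's "basic extrapolation" is just a linear scaling in FLOPS. As a sketch (ignoring, as the article notes, the inevitable complexities):

```python
K_PFLOPS = 10          # peak of the K computer in the 2013 NEST run
K_BRAIN_PERCENT = 1    # it simulated 1% of the brain's neuronal network

def brain_percent(pflops: float) -> float:
    """Naive linear extrapolation: simulated brain fraction scales
    with available FLOPS, complexities ignored."""
    return K_BRAIN_PERCENT * pflops / K_PFLOPS

print(brain_percent(415.5))         # 41.55 -> circa 40% for Fugaku
print(round(brain_percent(10**6)))  # a zettascale machine covers ~1,000 brains
```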

Digital neuromorphic hardware (supercomputers created specifically to simulate the human brain) like SpiNNaker 1 and 2 will also continue to develop in the post-exascale future. Instead of sending information from point A to B, these machines will be designed to replicate the parallel communication architecture of the brain, sending information simultaneously to many different locations.

Modern iterations are already used to help neuroscientists better understand the mysteries of the brain and future versions, aided by advances in artificial intelligence, will inevitably be used to construct a faithful and fully-functional replica.

The ethical debates that will arise with the arrival of such a machine - surrounding the perception of consciousness, the definition of thought and what an artificial uber-brain could or should be used for - are manifold and could take generations to unpick.

The inability to foresee what a zettascale computer might be capable of is also an inability to plan for the moral quandaries that might come hand-in-hand.

Whether a future supercomputer might be powerful enough to simulate human-like thought is not in question, but whether researchers should aspire to bringing an artificial brain into existence is a subject worthy of discussion.


 
TᵢG
Professor Principal
1  seeder  TᵢG    4 years ago

One of the most complex computing tasks is simulating a single molecule.   There are just so many moving parts that the very best modern computers have been able to simulate the quantum mechanical behavior of only the most basic of molecules.    As we progress, we should start unlocking some amazing secrets that have until now been out of our reach simply because we could not get past the complexity.   This is true from the very small quantum level to the very large cosmic level.

 
 
 
Account Deleted
Freshman Silent
1.1  Account Deleted  replied to  TᵢG @1    4 years ago

Asimov - where are you when we need you.

It is also possible that the decentralized model might be the fastest route to achieving zettascale, with millions of less powerful devices working in unison to form a collective supercomputer more powerful than any single machine (as put into practice by the SETI Institute).

Interesting how we cycle back and forth between centralization and decentralization - both in computer systems and in business management models.

Most econ books discuss the failures of the communist command central planning board - how they were inferior to the market system - leading to shortages and surpluses.

Think of an economic system with a computer of this power as the "central planning board".

 
 
 
TᵢG
Professor Principal
1.1.1  seeder  TᵢG  replied to  Account Deleted @1.1    4 years ago
Interesting how we cycle back and forth between centralization and decentralization - both in computer systems and in business management models.

I agree.   But we are also scaling up in levels of sophistication as we do it.   Computing devices show this nicely.   We started with individual devices (such as the slide rule) and worked into tabulation machines.  We eventually got to a computer which was of course centralized.   Once costs came down the computer made it to the consumer market.   When LAN technology matured our individual computers networked together.   Once the internet emerged we had a mature decentralized paradigm.   But as maintenance and security issues mounted we moved back to server-centric computing (aka 'the cloud').   But, then again, advances in technology decoupled us from our land line phones and home computers and decentralized a ton of everyday processing.  

Think of an economic system with a computer of this power as the "central planning board".

Command economies have been far too complex to administer.   That is the main reason they failed.   Nowadays we are close to the point of having sufficient real-time information, sophisticated algorithms and the computing power to administer them.   A command economy (or, more accurately, a partial command economy) in the future might turn from a nightmare scenario (as in the past) into a superior method for controlling an economy.

 
 
 
Account Deleted
Freshman Silent
1.1.2  Account Deleted  replied to  TᵢG @1.1.1    4 years ago
A command economy (or, more accurately, a partial command economy) in the future might turn from a nightmare scenario (as in the past) into a superior method for controlling an economy.

The system would have access to all past purchases and present inventories and the financial standing of individuals. Very good estimates of demand could be made.

 
 
 
TᵢG
Professor Principal
1.1.3  seeder  TᵢG  replied to  Account Deleted @1.1.2    4 years ago
Very good estimates of demand could be made.

Agreed.   And in terms of algorithms, what you have just suggested enables machine learning.

Hey, it works for MLB.

 
 
 
Gordy327
Professor Guide
1.2  Gordy327  replied to  TᵢG @1    4 years ago

Very impressive computer. But can it calculate the meaning of life, the universe, and everything?

 
 
 
sandy-2021492
Professor Expert
2  sandy-2021492    4 years ago
Science fiction coming soon so better brush up on the names of the ridiculously large numerical magnitudes.

Easier to just use powers of 10.

whether researchers should aspire to bringing an artificial brain into existence is a subject worthy of discussion.

Unfortunately, I don't know if that's something that will be adequately discussed before the technology exists.  Humans have a bad track record with regards to exploring the ethics that accompany advancement.

 
 
 
Buzz of the Orient
Professor Expert
3  Buzz of the Orient    4 years ago

Speaking as a person who learned to calculate change in my head while selling goods in my uncle's store where he had a vintage cash register, and having achieved the lowest final mark in Calculus in the history of my university (bound still to be a record), will these super-super-super computers eventually be able to determine the exact numerical equivalent of Pi?

 
 
 
TᵢG
Professor Principal
3.1  seeder  TᵢG  replied to  Buzz of the Orient @3    4 years ago

Nope, π is transcendental.   There is no exact numerical equivalent.

 
 
 
Buzz of the Orient
Professor Expert
3.1.1  Buzz of the Orient  replied to  TᵢG @3.1    4 years ago

Ah, if it's transcendental, then Maharishi Mahesh Yogi might be the person to ask. 

 
 
 
TᵢG
Professor Principal
3.1.2  seeder  TᵢG  replied to  Buzz of the Orient @3.1.1    4 years ago

A good point.

 
 
 
CB
Professor Principal
4  CB    4 years ago

These computing machines being discussed will outthink humans. They will have no understanding for the value we see in say protests, wars, and/or fist-fighting. What do humans do when technically life has evolved to 'betterment' and humans lag behind due to making 'jealous' errors? (That is, when humanity decides it is too progressively learning and decides to drop 'anchors' along the digital landscape?)

 
 
 
CB
Professor Principal
5  CB    4 years ago

Can a super-computer make a 'snap' judgment? If so, what is the higher value/point in creating machines in the image of humanity?

 
 
 
TᵢG
Professor Principal
5.1  seeder  TᵢG  replied to  CB @5    4 years ago
Can a super-computer make a 'snap' judgment? If so, what is the higher value/point in creating machines in the image of humanity?

Computers can make judgment calls today.   That is a function of software.   A judgment call is simply a multivariate calculation that is sufficiently complex to seem as though there is some thinking.

For example, we are all unimpressed when a computer determines that we are running short of a supply and need to reorder.

But we would be impressed if a computer determines that two dissimilar face shots are correctly deemed to be the same person from different views.

We would also be impressed if a computer can read a description of a legal case and author the brief.

And no doubt we are impressed when a computer hears a Jeopardy answer and analyzes its base of knowledge to find the most fitting question.

And imagine the judgment required to play the most complex strategic game on the planet (Go) and beat the best human master.

All of these are working today.

Now if by 'snap' judgment you mean rough analysis that is done quickly by not considering all the available factors, then yes a computer can certainly do that.   This actually can parallel the human amygdala.   Imagine a computer security bot whose job is to detect 'tells' of a potential security violation.   That is a snap judgment.   But in this case the snap judgment is used to deliver candidates for further consideration (full judgment).

They will have no understanding for the value we see in say protests, wars, and/or fist-fighting.

AI software could indeed have an understanding of the above. 

What do humans do when technically life has evolved to 'betterment' and humans lag behind due to making 'jealous' errors?

You mean something like having cyber courts where the judge and jury are actually well-informed AI actors running on supercomputers?  Hard to imagine the implications or even how that might evolve.   It is a great seed idea for a science fiction story.

 
 
 
CB
Professor Principal
5.1.1  CB  replied to  TᵢG @5.1    4 years ago

Excellent comment. 

Actually in the last quote of mine: "What do humans do when technically life has evolved to 'betterment' and humans lag behind due to making 'jealous' errors?" I mean, there may (will) come a time when supercomputers map out a path to a better society for the whole of humanity, and inevitably there will be push-back and dilemmas presented by humans in power as the "politics of the day" resist making change (from all sides in the then majority and minority).

Sorry for not being clear.

 
 
 
Bob Nelson
Professor Guide
6  Bob Nelson    4 years ago

Good article. 

We see the vast increase in computing power... which underscores the absence of software to exploit it. 

The article does a good job of inventorying the paths to be explored, while situating us close to the start of most of them. 

Update next year? 

 
 
 
TᵢG
Professor Principal
6.1  seeder  TᵢG  replied to  Bob Nelson @6    4 years ago
We see the vast increase in computing power... which underscores the absence of software to exploit it. 

We have all sorts of algorithms on the shelf ready for more powerful machines.   To give a historical example, the concept of neural networks (a key method in modern AI) was formalized in the 1940s, but the data and computing power needed to make it practical did not arrive until this century.   Now we see it applied everywhere, from natural language recognition to game strategies and tactics in modern sports like baseball.

 
 
 
CB
Professor Principal
6.1.1  CB  replied to  TᵢG @6.1    4 years ago
We have all sorts of algorithms on the shelf ready for more powerful machines. 

Such systems in the hands of humanity; we are yes immature in character. I agree that this is frightening, because people simply won't grow "up" to meet the needs of a progressive society.

What is startling about this is that all this computing power is not intended or designed to be for computing's sake per se; it is intended for humanity's sake. And yet, people simply are not willing and able to be 'fit' for this.

 
 
 
TᵢG
Professor Principal
6.1.2  seeder  TᵢG  replied to  CB @6.1.1    4 years ago

I am referring to algorithms that simulate the universe, that simulate chemical compounds, etc.   There is so much cool stuff that we could do to give us insight into how nature works.   We have the mathematics but we lack the data and mostly the computational power to do the calculations required by the mathematics.

 
 
 
CB
Professor Principal
6.1.3  CB  replied to  TᵢG @6.1.2    4 years ago

Growth and development, now that in and of itself is a great thing for humanity.

 
 
 
Ed-NavDoc
Professor Quiet
7  Ed-NavDoc    4 years ago

Skynet from "The Terminator" series of movies does not seem so farfetched after all!

 
 
 
Ender
Professor Principal
7.1  Ender  replied to  Ed-NavDoc @7    4 years ago

I was thinking more like Eagle Eye.

With the facial recognition and the networks and cameras everywhere.

Having a system that actively monitors it all.

 
 
 
Ed-NavDoc
Professor Quiet
7.1.1  Ed-NavDoc  replied to  Ender @7.1    4 years ago

Very true. Forgot about that one.

 
 
 
Drakkonis
Professor Guide
7.1.2  Drakkonis  replied to  Ender @7.1    4 years ago
Having a system that actively monitors it all.

Which is likely the goal for most governments. China is pushing its tech in this area already, monitoring every aspect of its citizens that it can. 

 
 
 
TᵢG
Professor Principal
7.1.3  seeder  TᵢG  replied to  Drakkonis @7.1.2    4 years ago
China is pushing its tech in this area already, monitoring every aspect of its citizens that it can. 

And they will succeed.    I am not in favor of this, but they have everything they need to accomplish über big brother.

 
 
 
CB
Professor Principal
7.1.4  CB  replied to  TᵢG @7.1.3    4 years ago

Why? Who really wants to aspire to live from cradle to grave in an all-consuming 'fish bowl' of omni-technology? So sad. Please God don't let it happen here to us.

 
 
 
TᵢG
Professor Principal
7.1.5  seeder  TᵢG  replied to  CB @7.1.4    4 years ago
Why?

Why what?

 
 
 
CB
Professor Principal
7.1.6  CB  replied to  TᵢG @7.1.5    4 years ago

Just wondering. The ultimate achievement for a communist nation or totalitarian state is to know mostly everything about nearly everything that goes on in its territories. What a horrible existence that can be. Makes me wish 'ne'er-do-wells' well, "well."

 
 
 
TᵢG
Professor Principal
7.1.7  seeder  TᵢG  replied to  CB @7.1.6    4 years ago

Authoritarian states want to know everything and control everything.   China is politically and technologically positioned to increase the power of its authoritarian rule.   Not a good thing, but I do not see China making any changes towards democracy any time soon.  The good news is that China continues to move away from the brutality of the former USSR and China under Mao Zedong.

 
 
 
Nerm_L
Professor Expert
8  Nerm_L    4 years ago

Speed won't overcome the problem of significant figures.  Errors can only propagate faster.

Consider that the natural evolution of humans has already established decentralized processing as a means for accommodating errors and refining results.  Human progress hasn't been accomplished by a single brain.  Complex problems are broken down into smaller chunks that are addressed by a large number of brains.  That's why peer review and independent validation are important activities.

The biggest challenge posed by supercomputers will be overcoming the human tendency to misinterpret and misapply results.  The machines may not lie because of their limited capability but that won't avoid the problem of machines making mistakes.  Modeling the human brain would ultimately create an artificial intelligence capable of lying; so AI wouldn't be any more trustworthy than humans.

 
 
 
TᵢG
Professor Principal
8.1  seeder  TᵢG  replied to  Nerm_L @8    4 years ago
Speed won't overcome the problem of significant figures.  Errors can only propagate faster.

Are you suggesting that supercomputers will just make errors faster?    I am not following your point.

Modeling the human brain would ultimately create an artificial intelligence capable of lying; so AI wouldn't be any more trustworthy than humans.

Future computers are not trying to model the human brain.   Neural networks, for example, do not actually model the neurons and synapses and all the many complex chemical reactions taking place.   They instead work on a very rough abstraction which consists of neurons (mechanisms that can accept input, realize a threshold and produce an output) and synapses (links that connect neurons in one layer to neurons in an adjacent layer).   This is extremely simple compared to our brains; it basically borrows from biology the essence of a cool computational scheme that is capable of accurately (good but with an error tolerance) solving pattern-recognition problems involving incomplete information.
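[Editor's note] The abstraction described above is small enough to sketch directly. This is an illustrative toy, not any particular library's implementation: each "neuron" weights its inputs (the "synapses"), adds a bias, and pushes the sum through a smooth threshold function; a layer is just a list of such neurons feeding the next layer. All names and numbers here are made up for the example.

```python
import math
import random

def sigmoid(x):
    # Smooth "threshold": squashes any real input into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def layer_forward(inputs, weights, biases):
    # One layer: each output neuron sums its weighted inputs
    # (its incoming "synapses"), adds a bias, and applies the threshold.
    return [
        sigmoid(sum(w * x for w, x in zip(neuron_weights, inputs)) + b)
        for neuron_weights, b in zip(weights, biases)
    ]

# A tiny layer: 3 inputs feeding 2 neurons with random synapse strengths.
random.seed(0)
weights = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
biases = [0.0, 0.0]
print(layer_forward([0.5, -0.2, 0.8], weights, biases))
```

Stacking a few such layers (the output of one becoming the input of the next) is the entire structural idea; everything chemical and biological about real neurons has been abstracted away.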

 
 
 
Nerm_L
Professor Expert
8.1.1  Nerm_L  replied to  TᵢG @8.1    4 years ago
Are you suggesting that supercomputers will just make errors faster?    I am not following your point.

Certainly a possibility.  Computing speed won't overcome imprecision in the data, bad assumptions, or misinterpretation of correlations.  Computing speed cannot fill the gaps of the unknown.  The danger is believing that the machine will be objective (due to the nature of mathematics); therefore, the computed results are objective. 

Future computers are not trying to model the human brain.   Neural networks, for example, do not actually model the neurons and synapses and all the many complex chemical reactions taking place.   They instead work on a very rough abstraction which consists of neurons (mechanisms that can accept input, realize a threshold and produce an output) and synapses (links that connect neurons in one layer to neurons in an adjacent layer).   This is extremely simple compared to our brains; it basically borrows from biology the essence of a cool computational scheme that is capable of accurately (good but with an error tolerance) solving pattern-recognition problems involving incomplete information.

Then the Turing test has no validity.  Pointing out that the machines are not modelling a biochemical computation engine is accurate.  The machines are not modelling an analog computation engine, either.  The physical structure of the device and its operating requirements are separate issues from the intended purpose of the device.

The machines are being used to model function; not form.  Of course the machine is not modelling the form of the brain; that's a rather obvious statement.  But modelling the function of the brain would include ability to make assumptions, arrive at erroneous conclusions, and include a capability for deception.  Claiming that the machines would eliminate the flaws found in functioning of the brain really suggests that the machine cannot model the brain's function, either.  The Turing test would be an invalid metric.

 
 
 
TᵢG
Professor Principal
8.1.2  seeder  TᵢG  replied to  Nerm_L @8.1.1    4 years ago
Certainly a possibility. 

That has been true since the first computer.   Seems like a cynical view of computing since correct calculations will also take place faster.

Then the Turing test has no validity. 

???    The Turing test does not test fidelity to the engineering of the human brain;  rather it tests the ability to mimic a mature, experienced human mind as perceived by an observer.

But modelling the function of the brain would include ability to make assumptions, arrive at erroneous conclusions, and include a capability for deception.  Claiming that the machines would eliminate the flaws found in functioning of the brain really suggests that the machine cannot model the brain's function, either.  The Turing test would be an invalid metric.

You simply stated that it is impossible to model the brain perfectly without carrying over its computational flaws.   Well of course not;  an exact duplication of the brain would duplicate the good and the bad.  

Why are you continuing with this notion of trying to model the brain 100%?   I have been talking about how computer scientists continue to look at the brain for inspiration, but then implement algorithms based on ideas.   No attempt is made to produce an exact replica of the brain as a method of computing.

 
 
 
CB
Professor Principal
8.1.3  CB  replied to  TᵢG @8.1    4 years ago

Good answer. It helps add clarity to a thought I was forming.

 
 
 
evilone
Professor Guide
9  evilone    4 years ago

Good article - nascent AI is all over the place these days and I'm not sure if it's the AI market driving hardware or hardware driving new AI. Just this morning I was watching videos on 2 different products - photography (Arsenal) and aquariums (Felix Smart Aquarium) using new AI systems. As more people use the systems the AI will improve and more computing power will be needed. Interesting times if enough people sign on to the technology. I have the original Arsenal, but I didn't see how it was better than what I could do personally and rarely use it. I do see a benefit to the Felix Smart Aquarium system, but the cost is prohibitive for me personally at this time.

 
 