Linux Foundation’s Blockchain consortium Hyperledger loses members, funding

Admittedly, 90% of Hyperledger projects are Proof-of-Stake – not even Blockchain at all.
And to call IBM’s “Fabric” contribution “Blockchain” is really a stretch.
That makes Hyperledger a hodgepodge of projects that lacks a clear focus.
Hopefully this focus will be tightened soon, if it really wants to stay relevant. — TJACK

 

NEW YORK (Reuters) – More than 15 members of blockchain consortium Hyperledger have either cut their financial support for the project or quit the group over the past few months, according to documents seen by Reuters.

Exchange operators CME Group and Deutsche Boerse have decided to downgrade their membership for the consortium starting at the end of January 2018, according to slides titled “member attrition” from a board meeting presentation held on Friday.

Led by the Linux Foundation, Hyperledger was launched in 2015 to develop blockchain technology for businesses. Blockchain, which first emerged as the system powering cryptocurrency bitcoin, is a shared record of data that is maintained by a network of computers on the internet.

CME Group and Deutsche Boerse were premier members of the group and will downgrade to a general membership.

Premier members are given board seats in the consortium and pay a fee of $250,000 a year. General memberships range from $5,000 to $50,000 based on the size of the companies, according to Hyperledger’s website.

Blockchain consortium R3 has also decided to downgrade its premier membership next year, according to the documents.

Spokespeople for CME Group and R3 confirmed the companies had downgraded their membership. Deutsche Boerse declined to comment.

Hyperledger executive director Brian Behlendorf said in a written statement that the group has seen “tremendous growth in membership” in 2017.

“We have seen some members who were part of the initial December 2015 cohort shift their spending priorities but remain members of the organization,” Behlendorf said. “We have seen others who never really engaged decide not to renew. This is normal and expected.”

Banks and other large corporations have been investing hundreds of millions of dollars in developing blockchain technology in the hopes it can help them simplify their costly record-keeping processes.

To speed up development many large companies have formed or joined industry groups including the Enterprise Ethereum Alliance and R3.

The weakening support for Hyperledger from some large members highlights how large firms have become more selective with their blockchain efforts as the technology matures. Earlier this year JP Morgan Chase & Co. left R3, following the departure of Goldman Sachs Group Inc, Banco Santander and others.

It comes amid an investing frenzy in cryptocurrencies and blockchain startups. The price of bitcoin hit a record of almost $18,000 on the Bitstamp exchange on Friday.

Despite the excitement, blockchain is not yet used to run any large-scale projects.

Hyperledger, which counts more than 180 members, of which 18 will be premier at the end of January 2018, released its first enterprise grade blockchain this year. Membership in Hyperledger also requires a separate membership with the Linux Foundation.

Disclaimer: Thomson Reuters, the parent group of Reuters, is a member of Hyperledger, R3 and the Enterprise Ethereum Alliance.

 

from: https://in.reuters.com/article/us-blockchain-consortium/blockchain-consortium-hyperledger-loses-members-funding-documents-idINKBN1E92O4


Researchers Reveal First-Ever Complete Quantum Chip Architecture

Researchers at the University of New South Wales have revealed an architectural structure that solves some of the stability issues that are facing quantum computing scientists, according to a recent report.

The new architecture, which the report compared in significance to landing a man on the Moon, utilizes currently available processors to organize how each ‘spin qubit’ is kept stable and interacts with those around it.

By building a grid of silicon transistors to control the spin and interaction of each qubit (qubits are the building blocks of a quantum computer), the researchers have been able to stabilize interactions between them, the sticking point of quantum computing to date. The author, Menno Veldhorst, says:

“By selecting electrodes above a qubit, we can control a qubit’s spin, which stores the quantum binary code of a 0 or 1. And by selecting electrodes between the qubits, two-qubit logic interactions, or calculations, can be performed between qubits.”

While the research moves the technology forward, the report indicates that there is still more to do to create a commercially viable technology.

Quantum Blockchain security

According to researchers at Carnegie Mellon, quantum computing could theoretically be used to break through the encryption mechanism of Blockchain technology, putting the security of the network at risk.

The risks associated with quantum computing, however, are somewhat distant, potentially giving researchers time to build encryption ‘patches’ that can handle them.

from: https://cointelegraph.com/news/breaking-researchers-reveal-first-ever-complete-quantum-chip-architecture

Ethereum ‘High Severity’ Browser Bug Could Put User Funds At Risk

Stick with the real thing, not the clones, to avoid such issues: PoW Blockchain — TJACK

 

Using ethereum browser Mist may put cryptocurrency private keys at risk, according to an Ethereum Foundation blog post published today.

The threat arises from a newly discovered vulnerability, which the blog post classifies as “high severity,” and impacts all existing versions of the browser. However, the Mist browser-compatible Ethereum Wallet is not affected, the post clarifies.

As a result, Mist users are urged to avoid “untrusted” websites, and to default to Ethereum Wallet to manage any funds.

The vulnerability stems from the underlying software framework, Electron. Electron’s delay in upgrading to correct known security issues has led to “an increasing potential attack surface as time passes,” the post’s author, Mist developer Everton Fraga, said.

As a result, Mist is considering migrating to a fork of Electron from Brave – named Muon – that has a more frequent release schedule.

In the post, Fraga stressed that Mist is still in beta mode, and users that engage with the browser do so without warranty.

He said:

“The Mist Browser beta is provided on an “as is” and “as available” basis and there are no warranties of any kind, expressed or implied, including, but not limited to, warranties of merchantability or fitness of purpose.”

The developer further described security as a “never-ending battle” in browser development, writing: “making a browser (an app that loads untrusted code) that handles private keys is a challenging task.”

Sponsored by the Ethereum Foundation, Mist is the most popular ethereum browser for browsing decentralized applications (dapps).

 

from: https://www.coindesk.com/ethereum-browser-bug-could-put-user-funds-at-risk/


Former DDoS & Malware ‘ButterFly’ Botmaster, ‘Darkode’ Founder is CTO of Hacked Slovenian Bitcoin Mining Firm ‘NiceHash’

NiceHash CTO Matjaž Škorjanc, as pictured on the front page of a recent edition of the Slovenian daily Delo.si

On Dec. 6, 2017, approximately USD $52 million worth of Bitcoin mysteriously disappeared from the coffers of NiceHash, a Slovenian company that lets users sell their computing power to help others mine virtual currencies. As the investigation into the heist nears the end of its second week, many NiceHash users have expressed surprise to learn that the company’s chief technology officer recently served several years in prison for operating and reselling a massive botnet, and for creating and running ‘Darkode,’ until recently the world’s most bustling English-language cybercrime forum.

In December 2013, NiceHash CTO Matjaž Škorjanc was sentenced to four years, ten months in prison for creating the malware that powered the ‘Mariposa‘ botnet. Spanish for “Butterfly,” Mariposa was a potent crime machine first spotted in 2008. Very soon after, Mariposa was estimated to have infected more than 1 million hacked computers — making it one of the largest botnets ever created.


ButterFly Bot, as it was more commonly known to users, was a plug-and-play malware strain that allowed even the most novice of would-be cybercriminals to set up a global operation capable of harvesting data from thousands of infected PCs, and using the enslaved systems for crippling attacks on Web sites. The ButterFly Bot kit sold for prices ranging from $500 to $2,000.

Prior to his initial arrest in Slovenia on cybercrime charges in 2010, Škorjanc was best known to his associates as “Iserdo,” the administrator and founder of the exclusive cybercrime forum Darkode.

 

A message from Iserdo warning Butterfly Bot subscribers not to try to reverse his code.

 

On Darkode, Iserdo sold his Butterfly Bot to dozens of other members, who used it for a variety of illicit purposes, from stealing passwords and credit card numbers from infected machines to blasting spam emails and hijacking victim search results. Microsoft Windows PCs infected with the bot would then try to spread the disease over MSN Instant Messenger and peer-to-peer file sharing networks.


In July 2015, authorities in the United States and elsewhere conducted a global takedown of the Darkode crime forum, arresting several of its top members in the process. The U.S. Justice Department at the time said that out of 800 or so crime forums worldwide, Darkode represented “one of the gravest threats to the integrity of data on computers in the United States and around the world and was the most sophisticated English-speaking forum for criminal computer hackers in the world.”

Following Škorjanc’s arrest, Slovenian media reported that his mother Zdenka Škorjanc was accused of money laundering; prosecutors found that several thousand euros were sent to her bank account by her son. That case was dismissed in May of this year after prosecutors conceded she probably didn’t know how her son had obtained the money.

Matjaž Škorjanc did not respond to requests for comment. But local media reports state that he has vehemently denied any involvement in the disappearance of the NiceHash stash of Bitcoins.

In an interview with Slovenian news outlet Delo.si, the NiceHash CTO described the theft “as if his kid was kidnapped and his extremities would be cut off in front of his eyes.” A roughly-translated English version of that interview has been posted to Reddit.

According to media reports, the intruders were able to execute their heist after stealing the credentials of a user with administrator privileges at NiceHash. Less than an hour after breaking into the NiceHash servers, approximately 4,465 Bitcoins were transferred out of the company’s accounts.

A source close to the investigation told KrebsOnSecurity that the NiceHash hackers used a virtual private network (VPN) connection with a Korean Internet address, although the source said Slovenian investigators were reluctant to say whether that meant South Korea or North Korea because they did not want to spook the perpetrators into further covering their tracks.

CNN, Bloomberg and a number of other Western media outlets reported this week that North Korean hackers have recently doubled down on efforts to steal, phish and extort Bitcoins as the price of the currency has surged in recent weeks.

“North Korean hackers targeted four different exchanges that trade bitcoin and other digital currencies in South Korea in July and August, sending malicious emails to employees, according to police,” CNN reported.

Bitcoin’s blockchain ledger system makes it easy to see when funds are moved, and NiceHash customers who lost money in the theft have been keeping a close eye on the Bitcoin payment address that received the stolen funds ever since. On Dec. 13, someone in control of that account began transferring the stolen bitcoins to other accounts, according to this transaction record.

The NiceHash theft occurred as the price of Bitcoin was skyrocketing to new highs. On January 1, 2017, a single Bitcoin was worth approximately $976. By December 6, the day of the NiceHash hack, the price had ballooned to $11,831 per Bitcoin.

Today, a single Bitcoin can be sold for more than $17,700, meaning whoever is responsible for the NiceHash hack has seen their loot increase in value by roughly $27 million in the nine days since the theft.

In a post on its homepage, NiceHash said it was in the final stages of re-launching the surrogate mining service.

“Your bitcoins were stolen and we are working with international law enforcement agencies to identify the attackers and recover the stolen funds. We understand it may take some time and we are working on a solution for all users that were affected.

“If you have any information about the attack, please email us at [email protected]. We are giving BTC rewards for the best information received. You can also join our community page about the attack on reddit.”

However, many followers of NiceHash’s Twitter account said they would not be returning to the service unless and until their stolen Bitcoins were returned.

from: https://krebsonsecurity.com/2017/12/former-botmaster-darkode-founder-is-cto-of-hacked-bitcoin-mining-firm-nicehash/


App Stores of Future Will Likely Be Based on Blockchain, And Promote Transparency

For years, Google and Apple have enjoyed an effective duopoly on apps,
earning huge fees and implementing opaque policies. Blockchain is poised to change all this.

 

In 2017, app stores saw record revenues and downloads. Across both iOS and Android platforms, downloads grew by 15 percent year-over-year to reach nearly 50 billion worldwide, taking total revenue to $15 billion.

Google Play and the Apple App Store are the most widely used mobile marketplaces across the globe. It’s because of this duopoly that they are able to impose a 30% cut on all purchases, app downloads and in-app items alike.

AppCoins wants to change this by utilising Blockchain technology, disrupting the traditional app economy. This economy is worth nearly $77 billion today and may double by 2020.

What problems will it solve?

The app economy is highly inefficient due to the many intermediaries that sit between users and developers. It is built around single entities that provide highly centralised platforms. These platforms offer distribution, discovery and financial transactions but have frustratingly non-transparent policies.

These entities are responsible not only for maintaining the apps and games on the store, but also for enforcing security measures and guaranteeing the integrity of every transaction made. Just like many other centralised systems, this model is flawed. Malware-plagued downloads, inaccessible in-app purchases for low-end markets, data leaks and privacy concerns are a few reasons why the user experience on mobile marketplaces is increasingly filled with risks and inconveniences.

In a nutshell, there are three crucial functions of app stores: advertising, in-app purchases and app approval. Aptoide wants to move these three functions to the Blockchain and revolutionise the mobile app industry with its supercharged app store protocol, AppCoins. AppCoins will act as a medium of exchange between end-users and developers, improving user experience and market efficiency through immutable smart contracts.

How will they be solved?

In order to make advertising more transparent and affordable, AppCoins will eliminate middlemen by creating a new method to acquire users that will make CPI (Cost per Installation) campaigns obsolete. They’re calling it CPAt (Cost Per Attention). This new model allows a developer to directly reward a user for spending at least two minutes inside the app. The user earns AppCoins, which are stored in their wallets, and can be used to make in-app purchases.

Blockchain will also help the AppCoins team reach the two billion smartphone users who do not have the payment methods needed to make in-app purchases. Under the new model, users will be rewarded through CPAt campaigns, earning AppCoins they can spend inside their favourite game or app.

With the use of Blockchain, app approvals will be made universal and more transparent through a developer reputation system. Their reputation will be validated by financial transactions on an auditable public ledger. A dispute system will be created so that AppCoins’ owners can create rankings for developers and the apps they publish.

The ICO

AppCoins is an ERC20 token that will be issued and used as the primary medium of exchange in the AppCoins protocol. App stores utilising this protocol will pay out 85% of the advertiser’s costs to the app users in the form of AppCoin tokens.

The pre-sale ended successfully, distributing over 20 million AppCoins. In total, including the pre-sale and public sale, 40 percent of the 700 million AppCoins tokens will be distributed.

The remaining tokens will go towards the App Store Foundation (15%), bootstrap strategies and key partnerships in the apps economy (20%), Aptoide (15%) and the key contributors to AppCoins idea (10%).

The token is valued at $0.10 with a hard cap of $28 million.

The Team

The Aptoide Android App Store was developed back in 2009 as a flexible, open and free alternative to the Google Play Store. Aptoide now has around 200 million users worldwide who have downloaded more than four billion apps and games.

Conclusion

AppCoins is supported by Aptoide, one of the world’s most popular app stores. It is highly likely that with the elimination of intermediaries, developers will be able to realise greater returns on investment, increase the monetisation potential of their product, and directly communicate with their customer base.

For more information on the ICO, please visit https://appcoins.io. We also recommend reading the Whitepaper to further understand their roadmap and technology.

 

from: https://cointelegraph.com/news/app-stores-of-future-will-be-based-on-blockchain-promote-transparency


GCHQ’s cybersecurity accelerator just opened its door to nine new startups

GCHQ has showcased the nine startups chosen to help protect the UK from the cyber-attacks of the future.

Software designed to detect phishing emails, a platform to help developers write secure code, and a company which investigates cybercrime involving cryptocurrencies are just some of the ideas behind the startups that will join the second incarnation of GCHQ’s cyber-accelerator.

Showcased at a launch event at the National Cyber Security Centre in London, the nine companies will spend nine months working alongside the UK intelligence agency and business experts in a scheme designed to help protect the UK from hackers and cybercrime.

It’s the second intake of startups for the GCHQ cyber-accelerator in Cheltenham, which saw the number of applications to be a part of the scheme double compared with its first incarnation.

“There’s a bunch of problems out there and we want solutions to them. That’s what we hope we’ve got with the companies that we’ve selected for the accelerator,” said Chris Ensor, NCSC deputy director for cyber skills and growth.

Below are the nine companies selected to take part in the nine month accelerator programme and what they do:

  • Cybershield detects phishing and spear-phishing, and alerts employees before they mistakenly act on deceptive emails
  • Elliptic detects and investigates cybercrime involving cryptocurrencies, enabling the company to identify illicit blockchain activity and provide intelligence to financial institutions and law enforcement agencies
  • ExactTrak supplies embedded technology that protects data and devices, giving the user visibility and control even when the devices are turned off
  • Intruder provides a proactive security monitoring platform for internet-facing systems and businesses, detecting system weaknesses before hackers do
  • Ioetec provides a plug-and-play cloud service solution to connect Internet of Things devices with end-to-end authenticated, encrypted security
  • RazorSecure provides advanced intrusion and anomaly detection for aviation, rail and automotive sectors
  • Secure Code Warrior has built a hands-on, gamified Software-as-a-Service learning platform to help developers write secure code
  • Trust Elevate solves the problem of age verification and parental consent for young adults and children in online transactions
  • Warden helps businesses protect their users from hacks in real time by monitoring for suspicious activity

The nine startups will get support to help with the development of their products, and the programme may even provide them with a better route to market.

“We’re here to help. I know when someone from the government says ‘we’re here to help’, some people shudder, but genuinely you guys are the top applicants, so we want to help: whether it’s access to our people, our data, access to other people, technical help, whatever it is, we’ll do our best to help you,” said Dr Ian Levy, technical director at the NCSC.

“We need innovation, we need different ways of thinking about this stuff and I see things like the accelerator as our key part of our strategy to getting those solutions,” he added.

The intelligence organisation is famously secretive — a point hammered home by the razor wire around its Cheltenham headquarters — but it’s hoped that by working with startups, GCHQ can be more transparent while also learning at the same time.

“The razor wire is there to keep people out, but actually it does keep a good job of keeping people in — it creates an internal community and we wanted to try to break out of that. By creating something in Cheltenham, it encouraged people to get out from behind the barbed wire, go into town, visit the companies there and start to work with them,” said Ensor.

“Yes, we’re trying to put new capabilities and new ideas out, yes, we’re trying to help startups, but we’re also trying to work out how we work in a much more agile way,” he added.

The programme is again running in conjunction with Wayra, the startup accelerator scheme backed by Telefonica, and Ensor says they’ve been instrumental in helping GCHQ step out of the shadows.

“[They’ve] held our hands through our tentative first steps as we branched out from behind our razor wire,” he said.

Gary Stewart, director at Wayra UK, said: “What we’re doing with GCHQ is something that nobody else in the world is doing. I don’t think there’s any other agency in the world which has opened itself up to work with early stage start-ups and help them to access investors and businesses when you’re a very early stage cybersecurity startup.”

The scheme — a collaboration between Department for Digital, Culture, Media, and Sport (DCMS), GCHQ, the NCSC, and Wayra UK — will run for nine months and those involved are set to receive help in all areas of business, including funding, office space, mentoring, and contact with an investor network.

 

from: http://www.zdnet.com/article/gchqs-cybersecurity-accelerator-just-opened-its-door-to-nine-new-start-ups/


[Chainium Pitch] Do-It-Yourself IPOs on the Blockchain

They use the term “Blockchain” very loosely, and actually refer to Ethereum as the platform …
Yet the general idea has validity (though better on the Blockchain, i.e. PoW) – thus re-published here — TJACK

 

The global equity market has seen a very turbulent decade since the start of the global recession. Yet despite the relatively stable economic growth the world has seen over the past five years there has been a significant decline in the number of Initial Public Offerings (IPOs), as well as a general lack of funding available to companies starting out. The market has been doing well, but new companies are not reaping the benefits.

This is despite the fact that there is a huge interest from the general public in investing. Apps like Robinhood have started to take the consumer market by storm, showing that young people have a genuine interest in investing to increase their wealth. This is backed up by the jaw-dropping interest in cryptocurrencies, with Bitcoin reaching all-time highs as of late and articles about cryptocurrencies appearing in all major media outlets.

So if the public is interested and the economy is doing well, what is standing in the way of companies accessing funding? The simple answer is the outdated funding system that is in place. The high fees and bureaucracy of VCs, banks, and private investment firms make the whole process opaque, expensive, and cumbersome. On top of this, investors have been wary of financial institutions since the global recession.

The global equity market seems to be at an impasse, with plenty of funding available and plenty of willing companies, but structural hindrances are congesting deal flow. As in many industries, equity markets are drifting towards decentralization, and decentralization is exactly what Blockchain technology is good for. So it is no surprise that a new startup is trying to make the equities industry more amenable to fundraising and investing. Chainium has been a work in progress for over two years.

Challenge: difficult to raise cash and difficult to invest

In many ways, the options for companies and investors are currently very limited; the industry is in a situation similar to the one the media industry faced before the rise of freelancing sites like Elance or Upwork. There is no easy way to link small private investors with growing companies, there are many legal hurdles to running an IPO, and there are upfront costs to be reckoned with. In this situation, peer-to-peer corporate financing for startups is completely unfeasible, so private equity firms, VCs, banks and IPOs on the stock market fill the void.

But as you might expect, these services come with downsides. The most notable is cost: VCs, for example, charge a sizeable premium on the sale. The other is that a business has to pass the subjective judgement of the private investor in order to get access to cash. In an ideal world, new companies would be able to go directly to their prospective end-investors with their value proposition.

Rendering middlemen obsolete

Chainium aims to make the process easier while also adding simplicity to the equation. Its main selling point is that it charges no fees, using Blockchain technology to handle the transparency of an equity sale. The platform, underpinned by the CHX token, puts companies in direct contact with investors. The company sums up its key advantage as follows:

“Business owners choose the terms on which they sell their equity and how actively they want to engage with their investors. A small family business could use Chainium to manage a simple private sale of an equity stake in order to ensure the sale is secure, legal and transparent. Alternatively, a larger firm could use Chainium to run a public equity offering and manage regulatory compliance, reporting, KYC/AML, investor communication, shareholder voting and all the other services required to manage a large investor base. Chainium removes the expensive intermediaries layer and enables access to equity fundraising directly from investors.”

Notably, the platform allows the same complete range of investor-company interaction that would be the case with traditional intermediaries, such as dividend issuance, shareholder voting, investor communications, financial reporting and analytics.

This is only possible with Blockchain technology, since it offers a distributed layer of verifiability and guarantees for processes that would otherwise be susceptible to fraud.

Simplicity in investing

The Chainium platform has been in development for a couple of years and the company has a functional version on its website, which should provide confidence to investors. They are hoping that the upcoming ICO in early 2018 will put them in the spotlight and bring their solution to the attention of those interested in the future of investments.

 

from: https://cointelegraph.com/news/do-it-yourself-ipos-on-a-blockchain


Hyperaware Machines: Will AI Ever Become Conscious, And How Would We Know if It Did?

11 DEC 2017

Forget about today’s modest incremental advances in artificial intelligence, such as the increasing abilities of cars to drive themselves.

Waiting in the wings might be a groundbreaking development: a machine that is aware of itself and its surroundings, and that could take in and process massive amounts of data in real time.

It could be sent on dangerous missions, into space or combat. In addition to driving people around, it might be able to cook, clean, do laundry – and even keep humans company when other people aren’t nearby.

A particularly advanced set of machines could replace humans at literally all jobs. That would save humanity from workaday drudgery, but it would also shake many societal foundations. A life of no work and only play may turn out to be a dystopia.

Conscious machines would also raise troubling legal and ethical problems. Would a conscious machine be a “person” under law and be liable if its actions hurt someone, or if something goes wrong?

To think of a more frightening scenario, might these machines rebel against humans and wish to eliminate us altogether? If yes, they represent the culmination of evolution.

As a professor of electrical engineering and computer science who works in machine learning and quantum theory, I can say that researchers are divided on whether these sorts of hyperaware machines will ever exist.

There’s also debate about whether machines could or should be called “conscious” in the way we think of humans, and even some animals, as conscious.

Some of the questions have to do with technology; others have to do with what consciousness actually is.

Is awareness enough?

Most computer scientists think that consciousness is a characteristic that will emerge as technology develops. Some believe that consciousness involves accepting new information, storing and retrieving old information and cognitive processing of it all into perceptions and actions.

If that’s right, then one day machines will indeed be the ultimate consciousness. They’ll be able to gather more information than a human, store more than many libraries, access vast databases in milliseconds and compute all of it into decisions more complex, and yet more logical, than any person ever could.

On the other hand, there are physicists and philosophers who say there’s something more about human behaviour that cannot be computed by a machine.

Creativity, for example, and the sense of freedom people possess don’t appear to come from logic or calculations.

Yet these are not the only views of what consciousness is, or whether machines could ever achieve it.

Quantum views

Another viewpoint on consciousness comes from quantum theory, which is the deepest theory of physics. According to the orthodox Copenhagen Interpretation, consciousness and the physical world are complementary aspects of the same reality.

When a person observes, or experiments on, some aspect of the physical world, that person’s conscious interaction causes discernible change.

Since it takes consciousness as a given and no attempt is made to derive it from physics, the Copenhagen Interpretation may be called the “big-C” view of consciousness, where it is a thing that exists by itself – although it requires brains to become real.

This view was popular with the pioneers of quantum theory such as Niels Bohr, Werner Heisenberg and Erwin Schrödinger.

The interaction between consciousness and matter leads to paradoxes that remain unresolved after 80 years of debate.

A well-known example of this is the paradox of Schrödinger’s cat, in which a cat is placed in a situation that results in it being equally likely to survive or die – and the act of observation itself is what makes the outcome certain.

The opposing view is that consciousness emerges from biology, just as biology itself emerges from chemistry which, in turn, emerges from physics. We call this less expansive concept of consciousness “little-C.”

It agrees with the neuroscientists’ view that the processes of the mind are identical to states and processes of the brain.

It also agrees with a more recent interpretation of quantum theory motivated by an attempt to rid it of paradoxes, the Many Worlds Interpretation, in which observers are a part of the mathematics of physics.

Philosophers of science believe that these modern quantum physics views of consciousness have parallels in ancient philosophy. Big-C is like the theory of mind in Vedanta – in which consciousness is the fundamental basis of reality, on par with the physical universe.

Little-C, in contrast, is quite similar to Buddhism. Although the Buddha chose not to address the question of the nature of consciousness, his followers declared that mind and consciousness arise out of emptiness or nothingness.

Big-C and scientific discovery

Scientists are also exploring whether consciousness is always a computational process. Some scholars have argued that the creative moment is not at the end of a deliberate computation.

For instance, dreams or visions are supposed to have inspired Elias Howe‘s 1845 design of the modern sewing machine, and August Kekulé’s discovery of the structure of benzene in 1862.

A dramatic piece of evidence in favour of big-C consciousness existing all on its own is the life of self-taught Indian mathematician Srinivasa Ramanujan, who died in 1920 at the age of 32.

His notebook, which was lost and forgotten for about 50 years and published only in 1988, contains several thousand formulas, stated without proof and spanning different areas of mathematics, that were well ahead of their time.

Furthermore, the methods by which he found the formulas remain elusive. He himself claimed that they were revealed to him by a goddess while he was asleep.

The concept of big-C consciousness raises the questions of how it is related to matter, and how matter and mind mutually influence each other. Consciousness alone cannot make physical changes to the world, but perhaps it can change the probabilities in the evolution of quantum processes.

The act of observation can freeze and even influence atoms’ movements, as Cornell physicists proved in 2015. This may very well be an explanation of how matter and mind interact.

Mind and self-organising systems

It is possible that the phenomenon of consciousness requires a self-organising system, like the brain’s physical structure. If so, then current machines will come up short.

Scholars don’t know if adaptive self-organising machines can be designed to be as sophisticated as the human brain; we lack a mathematical theory of computation for systems like that.

Perhaps it’s true that only biological machines can be sufficiently creative and flexible. But then that suggests people should – or soon will – start working on engineering new biological structures that are, or could become, conscious.

 

Subhash Kak, Regents Professor of Electrical and Computer Engineering, Oklahoma State University.

This article was originally published by The Conversation. Read the original article.

 

from: http://www.sciencealert.com/will-artificial-intelligence-become-conscious-ai-alive


Lightning Layer: The Blockchain Scaling Tech You Really Should Know

“What is bitcoin? Can I buy, like, pizza with it?”

Asked by sports blogger Dave Portnoy in his inaugural video as a bitcoin investor, the comment cuts to the core of a truism about the network: while it’s been billed as a “digital currency,” it’s actually not all that useful for payments today. In short, you’re very unlikely to stumble on a bodega that accepts it (should you even want to spend it).

But that’s not to say that engineers aren’t working on addressing the issue.

That’s why one of the most talked about technologies currently in development for bitcoin is the Lightning Network.

Rather than updating bitcoin’s underlying software (which has proven to be a messy process), Lightning essentially adds an extra layer to the tech, one where transactions can be made more cheaply and quickly, but with, hypothetically, the same security backing of the blockchain.

Proposed as far back as 2015, Lightning has progressed gradually over the years, migrating from white paper, to prototype, to more advanced prototype.

It’s the most recent test, however, that has some looking forward to a not-so-distant future when users can at last transact via Lightning, putting to the test long-held assumptions and criticisms.

As Jack Mallers, developer of the Lightning desktop app Zap, put it:

“It’s fairly close to working to the point where the public can test with real money, but not necessarily at the point where people can operate a business on it quite yet.”

Step one, the technology

What steps are left before Lightning is usable? Lightning engineers have some ideas.

Though Lightning took a big step early this week, the engineers still need to release software with which real users can make real Lightning transactions. So, the first and most obvious step is to let Lightning out of the cage and to watch and see if users have any issues during this initial stage.

“In the near future most problems will be about getting Lightning to work in practice,” Swiss university ETH Zürich researcher Conrad Burchert told CoinDesk.

And, once Lightning’s up and running, engineers foresee other subtle technical challenges, such as getting the “network structure” right, Burchert said. Bad actors might be able to halt transactions, for example, or users might want more control over where their transactions are going.

“Whenever you’re building a new financial protocol, you want to ensure it’s secure as possible, so we’re working on various security-related efforts,” said Elizabeth Stark, co-founder and CEO of Lightning Labs, one of a handful of startups dedicated solely to the technology.

Mallers agreed that these technical hurdles need to be solved before Lightning can reach the mainstream.

“All of that will need to get ironed out before I would advise a company to start to rely on the Lightning Network for business or money that they can’t afford to lose,” Mallers said, adding:

“The only thing that could speed it up is more engineers.”

Stark concurred, adding that despite the promise of the technology, there are astoundingly few developers working on it right now.

“We need more hours in the day. … There are 10 or fewer full-time developers working across all implementations of Lightning. Getting more contributors and people building out the protocol would certainly help move things along,” she told CoinDesk.

Hiding the wires

Another piece of the puzzle is making the Lightning apps easy to use.

It’s promising that apps supporting Lightning as a payment method are already cropping up, but so far they’re pretty confusing to use. A lot of the wires are still popping out into view.

Zap, a Lightning desktop app, requires users to configure their node and plug in its IP address, for example, a far cry from today’s money apps that hide these technical details from users.

“That stuff will definitely be hidden one day,” said Mallers, envisioning that Zap will one day look closer to Venmo, an app for sending small amounts of money to friends. “Eventually peers on the network will just look like contacts on your phone.”

Mallers argues this is happening already.

LND, the Lightning implementation most popular among app developers, for example, recently added a feature that automates creating a channel between the sender and receiver when users deposit money, “so that users don’t have to understand what all that means,” he said.

That’s not to say he thinks it will happen right away, though.

“Baby steps,” he continued. “Right now the Lightning Network still probably favors technical users. Slowly but surely we’ll abstract a lot of this stuff away, so it’s just about paying and receiving money.”

“As far as Lightning Network changing the world – where I can wave my phone and pay for things and stuff will show up – I’d say it’ll take a year or two,” Mallers said.

Chicken-and-egg issue

Then there’s the question: Will users actually want to use bitcoin? Even with faster, cheaper Lightning-like transactions in place?

Bitcoin developer Alphonse Pace believes it could be a challenge for Lightning to achieve a “network effect,” where users have an incentive to use the technology because other people are using it.

And, who will adopt it first?

“It’s a chicken-and-egg problem,” said Pace. “Wallets will want people wanting to use it to support it, and people will want wallets to support using it.”

Alex Bosworth, developer of Lightning apps HTLC.me and Yalls, alluded to a similar problem.

“There is somewhat of a bootstrapping issue. We need to have apps to encourage wallets and wallets to encourage apps,” said Bosworth.

And, even if bitcoin transactions become faster and cheaper (because of Lightning) than familiar payment apps, such as Apple Pay, he thinks users will be cautious at first.

“If you ask a normal person what they want to pay with, they would probably go with Apple Pay because that’s what they are used to,” he said.

In conversation, Lightning developers expect to overcome these hurdles. But, again, they think it will take time.

High expectations

Although it might take time to iron out these issues, developers were mostly optimistic that Lightning would help achieve the dream of making bitcoin a usable payment system. Rome wasn’t built in a day, after all. And neither were computers or the internet, which each took decades to reach normal people.

Mallers argued Lightning “will really change the way that we send money to each other on a day to day basis.” But he thinks the community might have unreal expectations for how long it will take engineers to achieve that.

“[To these engineers,] I would hope you’d go ‘oh take your time, would you like any water?’ But the community seems to be like ‘Why’s it not here tomorrow?’ I think users over-estimated Lightning’s deadlines,” he said.

Bosworth offered a similarly optimistic take: “[The Lightning Network] could be like the WWW was to email. It might take a while to grow, but the more it grows the better it will get.”

He added that his dad, Adam Bosworth, led the tech team behind one of the first web browsers in 1995 [at Microsoft … – he is also known as one of the pioneers of XML technology; prior to Microsoft, Bosworth worked for Borland where he developed the Quattro spreadsheet application — TJACK], just as the internet was finally making its way out of research laboratories to normal people.

Bosworth said:

“I remember that time as being pretty exciting because of all of the opportunities that were going to fall out of web browsers. This reminds me a lot of that.”

 

from: https://www.coindesk.com/lightning-bitcoin-scaling-tech-really-know/


NVIDIA: AI Makes Fake Videos Which May Facilitate The End of ‘Reality’ as We Know It

What (whom) are we going to trust?

Anyone worried about the ability of artificial intelligences (AI) to mimic reality is likely to be concerned by Nvidia’s latest offering: an image translation AI that will almost certainly have you second-guessing everything you see online.

In October, Nvidia demonstrated the ability of one of their AIs to generate disturbingly realistic images of completely fake people. Now, the tech company has produced one that can generate fake videos.

The AI does a surprisingly decent job of changing day into night, winter into summer, and house cats into cheetahs (and vice versa).

Best (or worst?) of all, the AI does it all with much less training than existing systems.


Like Nvidia’s face-generating AI, this image translation AI makes use of a type of algorithm called a generative adversarial network (GAN).

In a GAN, two neural networks work together by essentially working against each other. One of the networks generates an image or video, while the other critiques its work.

Typically, a GAN requires a significant amount of labeled data to learn how to generate its own data. For example, a system would need to see pairs of images showing what a street looked like with and without snow in order to generate either image on its own.

However, this new image translation AI developed by Nvidia researchers Ming-Yu Liu, Thomas Breuel, and Jan Kautz can imagine what a snow-covered version of a street would look like without ever actually seeing it.

Trusting Your Own Eyes

Liu told The Verge that the team’s research is being shared with Nvidia’s product teams and customers, and while he said he couldn’t comment on how quickly or to what extent the AI would be adopted, he did note that there are several interesting potential applications.

“For example, it rarely rains in California, but we’d like our self-driving cars to operate properly when it rains,” he said.

“We can use our method to translate sunny California driving sequences to rainy ones to train our self-driving cars.”

Beyond such practical applications, the tech could have whimsical ones as well. Imagine being able to see how your future home might look in the middle of winter when shopping for houses, or what a potential outdoor wedding location will look like in the fall when leaves blanket the ground.

That said, such technology could have nefarious uses as well. If widely adopted, our ability to trust any video or image based solely on what our eyes tell us would be greatly diminished.

 

(this is more simply doing inverse graphics – but the snow add-on above is scary — TJACK)

 

Video evidence could become inadmissible in court, and fake news could become even more prevalent online as real videos become indistinguishable from those generated by AI.

Of course, right now, the capabilities of this AI are limited to just a few applications, and until it makes its way into consumer hands, we have no way of telling how it will impact society as a whole.

For now, we can only hope that the potential of this tech incites discussions about AI and proper regulation because, clearly, the impact of AI is going to reach well beyond the job market.

This article was originally published by Futurism. Read the original article.

 

from: http://www.sciencealert.com/nvidia-ai-translation-software-fake-videos

 

NVIDIA Introduces Fake People Generated by AI

Chipmaker NVIDIA has developed an AI that produces highly detailed images of human-looking faces, but the people depicted don’t actually exist. The system is the latest example of how AI is blurring the line between the “real” and the fabricated.

Better Than Doodles

Back in June, an image generator that could turn even the crudest doodle of a face into a more realistic looking image made the rounds online. That system used a fairly new type of algorithm called a generative adversarial network (GAN) for its AI-created faces, and now, chipmaker NVIDIA has developed a system that employs a GAN to create far more realistic-looking images of people.

Artificial neural networks are systems developed to mimic the activity of neurons in the human brain. In a GAN, two neural networks are essentially pitted against one another. One of the networks functions as a generative algorithm, while the other challenges the results of the first, playing an adversarial role.

As part of their expanded applications for artificial intelligence, NVIDIA created a GAN that used CelebA-HQ’s database of photos of famous people to generate images of people who don’t actually exist. The idea was that the AI-created faces would look more realistic if two networks worked against each other to produce them.

First, the generative network would create an image at a lower resolution. Then, the discriminator network would assess the work. As the system progressed, the programmers added new layers dealing with higher-resolution details until the GAN finally generated images of “unprecedented quality,” according to the NVIDIA team’s paper.

Human or Machine?

NVIDIA released a video of their GAN in action, and the AI-created faces are both absolutely remarkable and incredibly eerie. If the average person didn’t know the faces were machine-generated, they could easily believe they belonged to living people.

Indeed, this blurring of the line between the human and the machine-generated is a topic of much discussion within the realm of AI, and NVIDIA’s GAN isn’t the first artificial system to convincingly mimic something human.


A number of AIs use deep learning techniques to produce human-sounding speech. Google’s DeepMind has WaveNet, which can now copy human speech almost perfectly. Meanwhile, startup Lyrebird’s algorithm is able to synthesize a human’s voice using just a minute of audio.

Even more disturbing or fascinating — depending on your perspective on the AI debate — are AI robots that can supposedly understand and express human emotion. Examples of those include Hanson Robotics’ Sophia and SoftBank’s Pepper.

Clearly, an age of smarter machines is upon us, and as the ability of AI to perform tasks previously only human beings could do improves, the line between human and machine will continue to blur. Now, the only question is if it will eventually disappear altogether.

 

from: https://futurism.com/these-people-never-existed-they-were-made-by-an-ai/


Generative Adversarial Networks (GAN) for Beginners

Build a neural network that learns to generate handwritten digits.


You can download and modify the code from this tutorial on GitHub here.

According to Yann LeCun, “adversarial training is the coolest thing since sliced bread.” Sliced bread certainly never created this much excitement within the deep learning community. Generative adversarial networks—or GANs, for short—have dramatically sharpened the possibility of AI-generated content, and have drawn active research efforts since they were first described by Ian Goodfellow et al. in 2014.

GANs are neural networks that learn to create synthetic data similar to some known input data. For instance, researchers have generated convincing images from photographs of everything from bedrooms to album covers, and they display a remarkable ability to reflect higher-order semantic logic.

Those examples are fairly complex, but it’s easy to build a GAN that generates very simple images. In this tutorial, we’ll build a GAN that analyzes lots of images of handwritten digits and gradually learns to generate new images from scratch—essentially, we’ll be teaching a neural network how to write.


Sample images from the generative adversarial network that we’ll build in this tutorial. During training, it gradually refines its ability to generate digits.

GAN architecture

Generative adversarial networks consist of two models: a generative model and a discriminative model.

The discriminator model is a classifier that determines whether a given image looks like a real image from the dataset or like an artificially created image. This is basically a binary classifier that will take the form of a normal convolutional neural network (CNN).

The generator model takes random input values and transforms them into images through a deconvolutional neural network.

Over the course of many training iterations, the weights and biases in the discriminator and the generator are trained through backpropagation. The discriminator learns to tell “real” images of handwritten digits apart from “fake” images created by the generator. At the same time, the generator uses feedback from the discriminator to learn how to produce convincing images that the discriminator can’t distinguish from real images.

Getting started

We’re going to create a GAN that will generate handwritten digits that can fool even the best classifiers (and humans too, of course). We’ll use TensorFlow, a deep learning library open-sourced by Google that makes it easy to train neural networks on GPUs.

This tutorial expects that you’re already at least a little bit familiar with TensorFlow. If you’re not, we recommend reading “Hello, TensorFlow!” or watching the “Hello, Tensorflow!” interactive tutorial on Safari before proceeding.

Loading MNIST data

We need a set of real handwritten digits to give the discriminator a starting point in distinguishing between real and fake images. We’ll use MNIST, a benchmark dataset in deep learning. It consists of 70,000 images of handwritten digits compiled by the U.S. National Institute of Standards and Technology from Census Bureau employees and high school students.

Let’s start by importing TensorFlow along with a couple of other helpful libraries. We’ll also import our MNIST images using a TensorFlow convenience function called read_data_sets.
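
In TensorFlow 1.x (the version this tutorial was written against), that setup might look roughly like this; the “MNIST_data/” directory name is an assumption:

```python
# Sketch: imports and MNIST loading (TensorFlow 1.x).
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

from tensorflow.examples.tutorials.mnist import input_data

# read_data_sets downloads MNIST on first use and returns train/validation/test splits.
mnist = input_data.read_data_sets("MNIST_data/")
```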

The MNIST variable we created above contains both the images and their labels, divided into a training set called train and a validation set called validation. (We won’t need to worry about the labels in this tutorial.) We can retrieve batches of images by calling next_batch on mnist. Let’s load one image and look at it.

The images are initially formatted as a single row of 784 pixels. We can reshape them into 28 x 28 pixel images and view them using PyPlot.
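
A minimal sketch of that step (the next_batch call and the reshape follow the description above; the 'Greys' colour map is an illustrative choice):

```python
# Grab one image from the training set; it arrives as a flat row of 784 pixels.
sample_image = mnist.train.next_batch(1)[0]
print(sample_image.shape)  # (1, 784)

# Reshape to 28 x 28 pixels and display it with PyPlot.
sample_image = sample_image.reshape([28, 28])
plt.imshow(sample_image, cmap='Greys')
plt.show()
```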

If you run the cell above again, you’ll see a different image from the MNIST training set.

Discriminator network

Our discriminator is a convolutional neural network that takes in an image of size 28 x 28 x 1 as input and returns a single scalar number that describes whether or not the input image is “real” or “fake”—that is, whether it’s drawn from the set of MNIST images or generated by the generator.

The structure of our discriminator network is based closely on TensorFlow’s sample CNN classifier model. It features two convolutional layers that find 5×5 pixel features, and two “fully connected” layers that multiply weights by every pixel in the image.

To set up each layer, we start by creating weight and bias variables through tf.get_variable. Weights are initialized from a truncated normal distribution, and biases are initialized at zero.

tf.nn.conv2d() is TensorFlow’s standard convolution function. It takes 4 arguments. The first is the input volume (our 28 x 28 x 1 images in this case). The next argument is the filter/weight matrix. Finally, you can also change the stride and padding of the convolution. Those two values affect the dimensions of the output volume.

If you’re already comfortable with CNNs, you’ll recognize this as a simple binary classifier—nothing fancy.
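
A minimal sketch of a discriminator along these lines follows. The use of tf.get_variable, tf.nn.conv2d and the two-convolution, two-dense structure matches the description above, while the specific sizes (32 and 64 feature maps, a 1,024-unit dense layer) are assumptions:

```python
# Sketch: CNN discriminator that maps a 28 x 28 x 1 image to a single unscaled score.
def discriminator(images, reuse_variables=None):
    with tf.variable_scope(tf.get_variable_scope(), reuse=reuse_variables):
        # First convolution + pooling: 32 feature maps of 5 x 5 features.
        d_w1 = tf.get_variable('d_w1', [5, 5, 1, 32],
                               initializer=tf.truncated_normal_initializer(stddev=0.02))
        d_b1 = tf.get_variable('d_b1', [32], initializer=tf.constant_initializer(0))
        d1 = tf.nn.conv2d(images, d_w1, strides=[1, 1, 1, 1], padding='SAME') + d_b1
        d1 = tf.nn.relu(d1)
        d1 = tf.nn.avg_pool(d1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

        # Second convolution + pooling: 64 feature maps.
        d_w2 = tf.get_variable('d_w2', [5, 5, 32, 64],
                               initializer=tf.truncated_normal_initializer(stddev=0.02))
        d_b2 = tf.get_variable('d_b2', [64], initializer=tf.constant_initializer(0))
        d2 = tf.nn.conv2d(d1, d_w2, strides=[1, 1, 1, 1], padding='SAME') + d_b2
        d2 = tf.nn.relu(d2)
        d2 = tf.nn.avg_pool(d2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

        # First fully connected layer.
        d_w3 = tf.get_variable('d_w3', [7 * 7 * 64, 1024],
                               initializer=tf.truncated_normal_initializer(stddev=0.02))
        d_b3 = tf.get_variable('d_b3', [1024], initializer=tf.constant_initializer(0))
        d3 = tf.reshape(d2, [-1, 7 * 7 * 64])
        d3 = tf.nn.relu(tf.matmul(d3, d_w3) + d_b3)

        # Output layer: one unscaled "real vs. fake" score per image (no sigmoid here;
        # see the loss functions later in the tutorial).
        d_w4 = tf.get_variable('d_w4', [1024, 1],
                               initializer=tf.truncated_normal_initializer(stddev=0.02))
        d_b4 = tf.get_variable('d_b4', [1], initializer=tf.constant_initializer(0))
        return tf.matmul(d3, d_w4) + d_b4
```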

Now that we have our discriminator defined, let’s take a look at the generator model. We’ll base the overall structure of our model on a simple generator published by Tim O’Shea.

You can think of the generator as a kind of reverse convolutional neural network. A typical CNN like our discriminator network transforms a 2- or 3-dimensional matrix of pixel values into a single probability. A generator, however, takes a d-dimensional vector of noise and upsamples it to become a 28 x 28 image. ReLU and batch normalization are used to stabilize the outputs of each layer.

In our generator network, we use three convolutional layers along with interpolation until a 28 x 28 pixel image is formed. (Actually, as you’ll see below, we’ve taken care to form 28 x 28 x 1 images; many TensorFlow tools for dealing with images anticipate that the images will have some number of channels—usually 1 for greyscale images or 3 for RGB color images.)

At the output layer we add a tf.sigmoid() activation function; this squeezes pixels that would appear grey toward either black or white, resulting in a crisper image.
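
A minimal sketch of such a generator follows. The three convolutions, the interpolation steps and the final tf.sigmoid() match the description above; the intermediate channel counts and the use of tf.contrib.layers.batch_norm are assumptions:

```python
# Sketch: generator that upsamples a z_dim-dimensional noise vector into a 28 x 28 x 1 image.
def generator(z, batch_size, z_dim):
    # Project the noise vector and reshape it into a small 7 x 7 "image" with 64 channels.
    g_w1 = tf.get_variable('g_w1', [z_dim, 7 * 7 * 64],
                           initializer=tf.truncated_normal_initializer(stddev=0.02))
    g_b1 = tf.get_variable('g_b1', [7 * 7 * 64], initializer=tf.constant_initializer(0))
    g1 = tf.matmul(z, g_w1) + g_b1
    g1 = tf.reshape(g1, [batch_size, 7, 7, 64])
    g1 = tf.nn.relu(tf.contrib.layers.batch_norm(g1, epsilon=1e-5, scope='g_bn1'))

    # Upsample to 14 x 14 by interpolation, then convolve.
    g1 = tf.image.resize_images(g1, [14, 14])
    g_w2 = tf.get_variable('g_w2', [3, 3, 64, 32],
                           initializer=tf.truncated_normal_initializer(stddev=0.02))
    g_b2 = tf.get_variable('g_b2', [32], initializer=tf.constant_initializer(0))
    g2 = tf.nn.conv2d(g1, g_w2, strides=[1, 1, 1, 1], padding='SAME') + g_b2
    g2 = tf.nn.relu(tf.contrib.layers.batch_norm(g2, epsilon=1e-5, scope='g_bn2'))

    # Upsample to 28 x 28, then convolve again.
    g2 = tf.image.resize_images(g2, [28, 28])
    g_w3 = tf.get_variable('g_w3', [3, 3, 32, 16],
                           initializer=tf.truncated_normal_initializer(stddev=0.02))
    g_b3 = tf.get_variable('g_b3', [16], initializer=tf.constant_initializer(0))
    g3 = tf.nn.conv2d(g2, g_w3, strides=[1, 1, 1, 1], padding='SAME') + g_b3
    g3 = tf.nn.relu(tf.contrib.layers.batch_norm(g3, epsilon=1e-5, scope='g_bn3'))

    # Final convolution down to one channel; the sigmoid pushes grey pixels
    # toward black or white, giving a crisper 28 x 28 x 1 image.
    g_w4 = tf.get_variable('g_w4', [1, 1, 16, 1],
                           initializer=tf.truncated_normal_initializer(stddev=0.02))
    g_b4 = tf.get_variable('g_b4', [1], initializer=tf.constant_initializer(0))
    g4 = tf.nn.conv2d(g3, g_w4, strides=[1, 1, 1, 1], padding='SAME') + g_b4
    return tf.sigmoid(g4)
```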

Generating a sample image

Now we’ve defined both the generator and discriminator functions. Let’s see what a sample output from an untrained generator looks like.

We need to open a TensorFlow session and create a placeholder for the input to our generator. The shape of the placeholder will be None x z_dimensions. The None keyword means that the value can be determined at session runtime. We normally have None as our first dimension so that we can have variable batch sizes. (With a batch size of 50, the input to the generator would be 50 x 100.) With the None keyword, we don’t have to specify batch_size until later.
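
As a sketch (the names z_dimensions and z_placeholder are illustrative; the noise length of 100 matches the 50 x 100 example above):

```python
z_dimensions = 100
z_placeholder = tf.placeholder(tf.float32, [None, z_dimensions])
```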

Now, we create a variable (generated_image_output) that holds the output of the generator, and we’ll also initialize the random noise vector that we’re going to use as input. The np.random.normal() function has three arguments. The first and second define the mean and standard deviation for the normal distribution (0 and 1 in our case), and the third defines the shape of the vector (1 x 100).
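
A sketch of those two steps (calling the generator with a batch size of 1 here is an assumption):

```python
# Output of the (still untrained) generator for a single noise vector.
generated_image_output = generator(z_placeholder, 1, z_dimensions)

# One random noise vector: mean 0, standard deviation 1, shape 1 x 100.
z_batch = np.random.normal(0, 1, [1, z_dimensions])
```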

 

 

Next, we initialize all the variables, feed our z_batch into the placeholder, and run the session.

The sess.run() function has two arguments. The first is called the “fetches” argument; it defines the value you’re interested in computing. In our case, we want to see what the output of the generator is. If you look back at the last code snippet, you’ll see that the output of the generator function is stored in generated_image_output, so we’ll use generated_image_output for our first argument.

The second argument takes a dictionary of inputs that are substituted into the graph when it runs. This is where we feed in our placeholders. In our example, we need to feed our z_batch variable into the z_placeholder that we defined earlier. As before, we’ll view the image by reshaping it to 28 x 28 pixels and show it with PyPlot.
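A sketch of that session call:

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # fetches = generated_image_output; feed_dict maps z_placeholder to our noise batch
    generated_image = sess.run(generated_image_output,
                               feed_dict={z_placeholder: z_batch})
    plt.imshow(generated_image.reshape([28, 28]), cmap='Greys')
    plt.show()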

 

 

That looks like noise, right? Now we need to train the weights and biases in the generator network to convert random numbers into recognizable digits. Let’s look at loss functions and optimization!

Training a GAN

One of the trickiest parts of building and tuning GANs is that they have two loss functions: one that encourages the generator to create better images, and the other that encourages the discriminator to distinguish generated images from real images.

We train both the generator and the discriminator simultaneously. As the discriminator gets better at distinguishing real images from generated images, the generator is able to better tune its weights and biases to generate convincing images.

Here are the inputs and outputs for our networks.
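A hedged sketch of those inputs and outputs, using the Dx and Dg names referenced below (the batch size of 50 echoes the earlier example; resetting the default graph discards the throwaway sample-image graph built above):

tf.reset_default_graph()
batch_size = 50

z_placeholder = tf.placeholder(tf.float32, [None, z_dimensions], name='z_placeholder')
x_placeholder = tf.placeholder(tf.float32, [None, 28, 28, 1], name='x_placeholder')

Gz = generator(z_placeholder, z_dimensions)      # generated images
Dx = discriminator(x_placeholder)                # discriminator scores for real MNIST images
Dg = discriminator(Gz, reuse_variables=True)     # discriminator scores for generated images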

 

 

So, let’s first think about what we want out of our networks. The discriminator’s goal is to correctly label real MNIST images as real (return a higher output) and generated images as fake (return a lower output). We’ll calculate two losses for the discriminator: one that compares Dx with 1 for real images from the MNIST set, and another that compares Dg with 0 for images from the generator. We’ll do this with TensorFlow’s tf.nn.sigmoid_cross_entropy_with_logits() function, which calculates the cross-entropy losses between Dx and 1 and between Dg and 0.

sigmoid_cross_entropy_with_logits operates on unscaled values rather than probability values from 0 to 1. Take a look at the last line of our discriminator: there’s no softmax or sigmoid layer at the end. GANs can fail if their discriminators “saturate,” or become confident enough to return exactly 0 when they’re given a generated image; that leaves the generator without a useful gradient to descend.

The tf.reduce_mean() function takes the mean value of all of the components in the matrix returned by the cross-entropy function. This is a way of reducing the loss to a single scalar value, instead of a vector or matrix.
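Putting the two discriminator losses together:

d_loss_real = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(logits=Dx, labels=tf.ones_like(Dx)))
d_loss_fake = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(logits=Dg, labels=tf.zeros_like(Dg)))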

 

 

Now let’s set up the generator’s loss function. We want the generator network to create images that will fool the discriminator: the generator wants the discriminator to output a value close to 1 when it’s given an image from the generator. Therefore, we want to compute the loss between Dg and 1.
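Which looks like:

g_loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(logits=Dg, labels=tf.ones_like(Dg)))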

 

 

Now that we have our loss functions, we need to define our optimizers. The optimizer for the generator network needs to update only the generator’s weights, not those of the discriminator. Likewise, when we train the discriminator, we want to hold the generator’s weights fixed.

In order to make this distinction, we need to create two lists of variables, one with the discriminator’s weights and biases, and another with the generator’s weights and biases. This is where naming all of your TensorFlow variables with a thoughtful scheme can come in handy.
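Assuming the 'd_' and 'g_' prefixes used in the sketches above, the split is a simple filter over tf.trainable_variables():

tvars = tf.trainable_variables()
d_vars = [var for var in tvars if 'd_' in var.name]
g_vars = [var for var in tvars if 'g_' in var.name]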

 

 

Next, we specify our two optimizers. Adam is usually the optimization algorithm of choice for GANs; it utilizes adaptive learning rates and momentum. We call Adam’s minimize function and also specify the variables that we want it to update—the generator’s weights and biases when we train the generator, and the discriminator’s weights and biases when we train the discriminator.

We’re setting up two different training operations for the discriminator here: one that trains the discriminator on real images and one that trains the discriminator on fake images. It’s sometimes useful to use different learning rates for these two training operations, or to use them separately to regulate learning in other ways.
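A sketch with illustrative learning rates:

d_trainer_real = tf.train.AdamOptimizer(0.0003).minimize(d_loss_real, var_list=d_vars)
d_trainer_fake = tf.train.AdamOptimizer(0.0003).minimize(d_loss_fake, var_list=d_vars)
g_trainer = tf.train.AdamOptimizer(0.0001).minimize(g_loss, var_list=g_vars)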

 

 

It can be tricky to get GANs to converge, and moreover they often need to train for a very long time. TensorBoard is useful for tracking the training process; it can graph scalar properties like losses, display sample images during training, and illustrate the topology of the neural networks.

If you run this script on your own machine, include the cell below. Then, in a terminal window, run tensorboard --logdir=tensorboard/ and open TensorBoard by visiting http://localhost:6006 in your web browser.
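A minimal set of summaries might look like this (the summary names and the tensorboard/ log directory are illustrative, chosen to match the command above):

tf.summary.scalar('Generator_loss', g_loss)
tf.summary.scalar('Discriminator_loss_real', d_loss_real)
tf.summary.scalar('Discriminator_loss_fake', d_loss_fake)
tf.summary.image('Generated_images', Gz, 5)   # show up to 5 generated samples per step

merged = tf.summary.merge_all()
writer = tf.summary.FileWriter("tensorboard/", tf.get_default_graph())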

 

 

And now we iterate. We begin by briefly giving the discriminator some initial training; this helps it develop a gradient that’s useful to the generator.

Then we move on to the main training loop. When we train the generator, we’ll feed a random z vector into the generator and pass its output to the discriminator (this is the Dg variable we specified earlier). The generator’s weights and biases will be updated in order to produce images that the discriminator is more likely to classify as real.

To train the discriminator, we’ll feed it a batch of images from the MNIST set to serve as the positive examples, and then train the discriminator again on generated images, using them as negative examples. Remember that as the generator improves its output, the discriminator continues to learn to classify the improved generator images as fake.

Because it takes a long time to train a GAN, we recommend not running this code block if you’re going through this tutorial for the first time. Instead, read along for now and then run the code block after it, which loads a pre-trained model so we can continue with the tutorial.

If you want to run this code yourself, prepare to wait: it takes about 3 hours on a fast GPU, but could take ten times that long on a desktop CPU.
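A condensed sketch of that loop (the iteration counts and the length of the pre-training phase are illustrative):

sess = tf.Session()
sess.run(tf.global_variables_initializer())

# Give the discriminator a head start so it offers the generator a useful gradient
for i in range(300):
    z_batch = np.random.normal(0, 1, [batch_size, z_dimensions])
    real_image_batch = mnist.train.next_batch(batch_size)[0].reshape([batch_size, 28, 28, 1])
    sess.run([d_trainer_real, d_trainer_fake],
             feed_dict={x_placeholder: real_image_batch, z_placeholder: z_batch})

# Main loop: train the discriminator on real and generated images, then train the generator
for i in range(100000):
    real_image_batch = mnist.train.next_batch(batch_size)[0].reshape([batch_size, 28, 28, 1])
    z_batch = np.random.normal(0, 1, [batch_size, z_dimensions])
    sess.run([d_trainer_real, d_trainer_fake],
             feed_dict={x_placeholder: real_image_batch, z_placeholder: z_batch})

    z_batch = np.random.normal(0, 1, [batch_size, z_dimensions])
    sess.run(g_trainer, feed_dict={z_placeholder: z_batch})

    if i % 100 == 0:
        # Periodically write the TensorBoard summaries defined earlier
        summary = sess.run(merged, feed_dict={x_placeholder: real_image_batch,
                                              z_placeholder: z_batch})
        writer.add_summary(summary, i)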

 

 

Because it can take so long to train a GAN, we recommend that you skip the cell above and execute the following cell. It loads a model that we’ve already trained for several hours on a fast GPU machine, and lets you experiment with the output of a trained GAN.
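A sketch of what loading a checkpoint looks like; the pretrained-model/pretrained_gan.ckpt path is a placeholder for wherever the trained weights actually live:

saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, 'pretrained-model/pretrained_gan.ckpt')
    # Sample ten new images from the trained generator and display the first one
    z_batch = np.random.normal(0, 1, [10, z_dimensions])
    generated_images = sess.run(Gz, feed_dict={z_placeholder: z_batch})
    plt.imshow(generated_images[0].reshape([28, 28]), cmap='Greys')
    plt.show()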

 

 

Training difficulties

GANs are notoriously difficult to train. Without the right hyperparameters, network architecture, and training procedure, the discriminator can overpower the generator, or vice-versa.

In one common failure mode, the discriminator overpowers the generator, classifying generated images as fake with absolute certainty. When the discriminator responds with absolute certainty, it leaves no gradient for the generator to descend. This is partly why we built our discriminator to produce unscaled output rather than passing its output through a sigmoid function that would push its evaluation toward either 0 or 1.

In another common failure mode, known as mode collapse, the generator discovers and exploits some weakness in the discriminator. You can recognize mode collapse in your GAN if it generates many very similar images regardless of variation in the generator input z. Mode collapse can sometimes be corrected by “strengthening” the discriminator in some way—for instance, by adjusting its training rate or by reconfiguring its layers.

Researchers have identified a handful of “GAN hacks” that can be helpful in building stable GANs.

Closing thoughts

GANs have tremendous potential to reshape the digital world that we interact with every day. The field is still very young, and the next great GAN discovery could be yours!

Other resources


This post is part of a collaboration between O’Reilly and TensorFlow. See our statement of editorial independence.

 

 

from: https://www.oreilly.com/learning/generative-adversarial-networks-for-beginners

 

 

 
