Author: VIE

  • Neuralink 2022 Update - Human Trials are coming

    Neuralink 2022 Update - Human Trials are coming

    Let’s get into the latest updates on Elon Musk’s futuristic brain implant company Neuralink. Elon has been talking a lot lately about Neuralink and some of the applications that he expects it will be capable of, or not capable of, in the first decade or so of the product life cycle.

    We know that Elon has broadly promised that Neuralink can do everything from helping people with spinal cord injuries, to enabling telepathic communication, curing brain diseases like Parkinson’s and ALS, allowing us to control devices with our thoughts, and even merging human consciousness with artificial intelligence.

    But as we get closer to the first clinical human trials for Neuralink, things are starting to become a little more clear on what this Brain Computer Interface technology will actually do, and how it will help people. So, let’s talk about what’s up with Neuralink in 2022.

    Neuralink Human Trials 2022

    When asked recently if Neuralink was still on track for their first human trial by the end of this year, Elon Musk replied by simply saying, “Yes.” Which I think is a good sign. It does seem like whenever Elon gives an abrupt answer like this, it means that he is confident about what he’s saying.

    For comparison, at around the same time last year, when asked about human trials of Neuralink, Elon wrote, “If things go well, we might be able to do initial human trials later this year.” Notice the significant difference in those two replies. Not saying this is a science or anything, but it is notable.

    We also saw earlier this year that Neuralink was looking to hire both a Director and a Coordinator for Clinical Trials. In the job posting, Neuralink says that the director will “work closely with some of the most innovative doctors and top engineers, as well as working with Neuralink’s first Clinical Trial participants.”

    We know that Neuralink has been conducting its surgical trials so far with a combination of monkeys and pigs. In their 2020 demonstration, Neuralink showed us a group of pigs that had all received Neuralink implants, and in some cases had also undergone the procedure to have the implant removed. Then in 2021, we were shown a monkey who could play video games without the need for a controller, using only his brain, which was connected with two Neuralink implants.

    Human trials with Neuralink would obviously be a major step forward in product development. Last year, Elon wrote that, “Neuralink is working super hard to ensure implant safety & is in close communication with the FDA.” Previously, during Neuralink events, he has said that the company is striving to exceed all FDA safety requirements, not just to meet them, in the same way that Tesla vehicles don’t just meet crash safety requirements but actually score higher than any other car ever manufactured.

    What can Neuralink Do?

    As we get closer to the prospective timeline for human testing, Elon has also been drilling down a little more into what exactly Neuralink will be able to do in its first-phase implementation. It’s been a little hard to keep track when Elon is literally talking about using this technology for every crazy thing that can be imagined – that Neuralink would make language obsolete, that it would allow us to create digital backups of human minds, that we could merge our consciousness with an artificial super intelligence and become ultra enhanced cyborgs.

    One of the new things that Elon has been talking about recently is treating morbid obesity with a Neuralink, which he brought up during a live TED Talk interview. That’s not something we expected to hear, but it’s a claim that does seem to be backed up by some science. There have already been a couple of studies done with brain implants in people with morbid obesity, in which the implant transmitted frequent electric pulses into the hypothalamus, a region of the brain thought to drive increases in appetite. It’s still too soon to know if that particular method is really effective, but it would be significantly less invasive than other surgeries that modify a patient’s stomach in hopes of suppressing their appetite.

    Elon followed up on the comment in a tweet, writing that it is “Certainly physically possible” to treat obesity through the brain. In the same post, Elon expanded on the concept, writing, “We’re working on bridging broken links between brain & body. Neuralinks in motor & sensory cortex bridging past weak/broken links in neck/spine to Neuralinks in spinal cord should theoretically be able to restore full body functionality.”

    This is one of the more practical implementations of Neuralink technology that we are expecting to see. Electrical signals can be read in the brain by one Neuralink device, then wirelessly transmitted via Bluetooth to a second Neuralink device implanted in a muscle group, where the signal from the brain is delivered straight into the muscles. This exact kind of treatment has been done before with brain implants and muscular implants, but it has always required the patient to have a very cumbersome setup, with wires running through their body into their brain, and wires running out of their skull and into a computer. The real innovation of Neuralink is that it makes this all possible with very small implants that connect wirelessly, so just by looking at the patient, you would never know that they have a brain implant.

    Elon commented on this in another tweet, writing, “It is an electronics/mechanical/software engineering problem for the Neuralink device that is similar in complexity level to smart watches (which are not easy!), plus the surgical robot, which is comparable to state-of-the-art CNC machines.”

    So the Neuralink has more in common with an Apple Watch than it does with any existing brain-computer interface technology. And it is only made possible by the autonomous robotic device that conducts the surgery; the electrodes that connect the Neuralink device to the brain cortex are too small and fine to be sewn by human hands.

    Elon touched on this in a response to being asked if Neuralink could cure tinnitus, a permanent ringing in the ears. Elon wrote, “Definitely. Might be less than 5 years away, as current version Neuralinks are semi-generalized neural read/write devices with about 1000 electrodes and tinnitus probably needs much less than 1000.” He then added that, “Future generation Neuralinks will increase electrode count by many orders of magnitude.”

    This brings us back to setting more realistic expectations of what a Neuralink can and cannot do. It’s entirely possible that in the future the device can be expanded to handle some very complex issues, but as it is today, the benefits will be limited. Recently a person tweeted at Elon, asking, “I lost a grandparent to Alzheimers – how will Neuralink address the loss of memory in the human brain?” Elon replied to say, “Current generation Neuralinks can help to some degree, but an advanced case of Alzheimers often involves macro degeneration of the brain. However, Neuralinks should theoretically be able [to] restore almost any functionality lost due [to] *localized* brain damage from stroke or injury.”

    So, because those 1,000 electrodes can’t go into all areas of the brain all at once, Neuralink will not be effective against a condition that afflicts the brain as a whole. But those electrodes can be targeted on one particular area of damage or injury, and that’s how Neuralink will start to help in the short term, and this will be the focus of early human trials.

    During his TED Talk interview, Elon spoke about the people that reached out to him, wanting to participate in Neuralink’s first human trials. Quote, “The emails that we get at Neuralink are heartbreaking. They’ll send us just tragic stories where someone was in the prime of life and they had an accident on a motorcycle and now someone who’s 25 years old can’t even feed themselves. This is something we could fix.” End quote.

    In a separate interview with Business Insider that was done in March, Elon talked more specifically about the Neuralink timeline, saying, “Neuralink in the short term is just about solving brain injuries, spinal injuries and that kind of thing. So for many years, Neuralink’s products will just be helpful to someone who has lost the use of their arms or legs or has just a traumatic brain injury of some kind.”

    This is a much more realistic viewpoint than what we’ve seen from Elon in past interviews. On one episode of the Joe Rogan podcast, Elon claimed that within 5 years language would become obsolete because everyone would be using Neuralink to communicate with a kind of digital telepathy. That could have just been the weed talking, but I’m hoping that the more realistic Elon’s messaging becomes, the closer we are getting to a real medical trial of the implant.

    And finally, the key to reaching a safe and effective human trial is going to be that robot sewing machine that threads the electrodes into the cortex, which Elon referred to as being comparable to a CNC machine. Because as good as the chip itself might be, if there isn’t a reliable procedure to perform the implant, then nothing can move forward. The idea is that after a round section of the person’s skull is removed, this robot will come in and place the tiny wires into very specific areas in the outer layer of the brain. These don’t go deep into the tissue; only a couple of millimeters is enough to tap into the neural network of electrical signals. In theory this can all be done in a couple of hours, while the patient is still conscious. They would get an anesthetic to numb their head, obviously, but they wouldn’t have to go under full sedation, and therefore could be in and out of the procedure in an afternoon. It’s a very similar deal to laser eye surgery: a fast and automated method to accomplish a very complex medical task.

    That’s what one Twitter user was referencing when he recently asked how close the new version two of the Neuralink robot was to inserting the chip as simply as a LASIK procedure. To which Elon responded, quote, “Getting there.”

    We know that the robot system is being tested on monkeys right now, and from what Elon says, it is making progress towards being suitable for human trials.

    The last interesting thing that Elon said on Twitter in relation to Neuralink was his comment, “No need for artificial intelligence, neural networks or machine learning quite yet.” He wrote these out as abbreviations, but these are all terms that we are well familiar with from Tesla and their autonomous vehicle program. We know that Elon is deeply involved in AI, and the people working for him in that department at Tesla are probably among the best in the world. This is a skill set that will eventually be applied at Neuralink, but to what end, we still don’t know.

  • The case for hybrid artificial intelligence

    The case for hybrid artificial intelligence

    Cognitive scientist Gary Marcus believes advances in artificial intelligence will rely on hybrid AI, the combination of symbolic AI and neural networks.

    Deep learning, the main innovation that has renewed interest in artificial intelligence in the past years, has helped solve many critical problems in computer vision, natural language processing, and speech recognition. However, as deep learning matures and moves from the peak of hype to the trough of disillusionment, it is becoming clear that it is missing some fundamental components.

    This is a reality that many of the pioneers of deep learning and its main component, artificial neural networks, have acknowledged in various AI conferences in the past year. Geoffrey Hinton, Yann LeCun, and Yoshua Bengio, the three “godfathers of deep learning,” have all spoken about the limits of neural networks.

    The question is, what is the path forward?

    At NeurIPS 2019, Bengio discussed system 2 deep learning, a new generation of neural networks that can handle compositionality, out-of-distribution generalization, and causal structures. At the AAAI 2020 Conference, Hinton discussed the shortcomings of convolutional neural networks (CNN) and the need to move toward capsule networks.

    But for cognitive scientist Gary Marcus, the solution lies in developing hybrid models that combine neural networks with symbolic artificial intelligence, the branch of AI that dominated the field before the rise of deep learning. In a paper titled “The Next Decade in AI: Four Steps Toward Robust Artificial Intelligence,” Marcus discusses how hybrid artificial intelligence can solve some of the fundamental problems deep learning faces today.

    Connectionists, the proponents of pure neural network–based approaches, reject any return to symbolic AI. Hinton has compared hybrid AI to combining electric motors and internal combustion engines. Bengio has also shunned the idea of hybrid artificial intelligence on several occasions.

    But Marcus believes the path forward lies in putting aside old rivalries and bringing together the best of both worlds.

    What’s missing in deep neural networks?

    The limits of deep learning have been comprehensively discussed. But here, I would like to focus on the generalization of knowledge, a topic that has been widely discussed in the past few months. While human-level AI is at least decades away, a nearer goal is robust artificial intelligence.

    Here’s how Marcus defines robust AI: “Intelligence that, while not necessarily superhuman or self-improving, can be counted on to apply what it knows to a wide range of problems in a systematic and reliable way, synthesizing knowledge from a variety of sources such that it can reason flexibly and dynamically about the world, transferring what it learns in one context to another, in the way that we would expect of an ordinary adult.”

    Those are key features missing from current deep learning systems. Deep neural networks can ingest large amounts of data and exploit huge computing resources to solve very narrow problems, such as detecting specific kinds of objects or playing complicated video games in specific conditions.

    However, they’re very bad at generalizing their skills. “We often can’t count on them if the environment differs, sometimes even in small ways, from the environment on which they are trained,” Marcus writes.

    Case in point: An AI trained on thousands of chair pictures won’t be able to recognize an upturned chair if such a picture was not included in its training dataset. A super-powerful AI trained on tens of thousands of hours of StarCraft 2 gameplay can play at championship level, but only under limited conditions. As soon as you change the map or the units in the game, its performance will take a nosedive. And it can’t play any game that is similar to StarCraft 2, such as Warcraft or Command & Conquer.

    A deep learning algorithm that plays championship-level StarCraft can’t play a similar game. It won’t even be able to maintain its level of gameplay if the settings are changed the slightest bit.

    The current approach to solving AI’s generalization problem is to scale the models: create bigger neural networks, gather larger datasets, use larger server clusters, and train the reinforcement learning algorithms for longer hours.

    “While there is value in such approaches, a more fundamental rethink is required,” Marcus writes in his paper.

    In fact, the “bigger is better” approach has yielded modest results at best while creating several other problems that remain unsolved. For one thing, the huge cost of developing and training large neural networks is threatening to centralize the field in the hands of a few very wealthy tech companies.

    When it comes to dealing with language, the limits of neural networks become even more evident. Language models such as OpenAI’s GPT-2 and Google’s Meena chatbot each have more than a billion parameters (the basic unit of neural networks) and have been trained on gigabytes of text data. But they still make some of the dumbest mistakes, as Marcus has pointed out in an article earlier this year.

    “When sheer computational power is applied to open-ended domains—such as conversational language understanding and reasoning about the world—things never turn out quite as planned. Results are invariably too pointillistic and spotty to be reliable,” Marcus writes.

    What’s important here is the term “open-ended domain.” Open-ended domains can be general-purpose chatbots and AI assistants, roads, homes, factories, stores, and many other settings where AI agents interact and cooperate directly with humans. As the past years have shown, the rigid nature of neural networks prevents them from tackling problems in open-ended domains. In his paper, Marcus discusses this topic in detail.

    Why do we need to combine symbolic AI and neural networks?

    Connectionists believe that approaches based on pure neural network structures will eventually lead to robust or general AI. After all, the human brain is made of physical neurons, not physical variables and class placeholders and symbols.

    But as Marcus points out in his essay, “Symbol manipulation in some form seems to be essential for human cognition, such as when a child learns an abstract linguistic pattern, or the meaning of a term like sister that can be applied in an infinite number of families, or when an adult extends a familiar linguistic pattern in a novel way that extends beyond a training distribution.”

    Marcus’ premise is backed by research from several cognitive scientists over the decades, including his own book The Algebraic Mind and the more recent Rebooting AI. (Another great read in this regard is the second chapter of Steven Pinker’s book How the Mind Works, in which he lays out evidence that symbol manipulation is an essential part of the brain’s functionality.)

    We already have proof that symbolic systems work. It’s everywhere around us. Our web browsers, operating systems, applications, games, etc. are based on rule-based programs. “The same tools are also, ironically, used in the specification and execution of virtually all of the world’s neural networks,” Marcus notes.

    Decades of computer science and cognitive science have proven that being able to store and manipulate abstract concepts is an essential part of any intelligent system. And that is why symbol-manipulation should be a vital component of any robust AI system.

    “It is from there that the basic need for hybrid architectures that combine symbol manipulation with other techniques such as deep learning most fundamentally emerges,” Marcus says.

    Examples of hybrid AI systems

    One prominent example is the Neuro-Symbolic Concept Learner (NSCL), a hybrid AI system developed by researchers at MIT and IBM. The NSCL combines neural networks and symbolic AI to solve visual question answering (VQA) problems, a class of tasks that is especially difficult to tackle with pure neural network–based approaches. The researchers showed that the NSCL was able to solve the VQA dataset CLEVR with impressive accuracy. Moreover, the hybrid AI model was able to achieve the feat using much less training data and producing explainable results, addressing two fundamental problems plaguing deep learning.

    Google’s search engine is a massive hybrid AI that combines state-of-the-art deep learning techniques such as Transformers and symbol-manipulation systems such as knowledge-graph navigation tools.

    AlphaGo, one of the landmark AI achievements of the past few years, is another example of combining symbolic AI and deep learning.

    “There are plenty of first steps towards building architectures that combine the strengths of the symbolic approaches with insights from machine learning, in order to develop better techniques for extracting and generalizing abstract knowledge from large, often noisy data sets,” Marcus writes.

    The paper goes into much more detail about the components of hybrid AI systems, and the integration of vital elements such as variable binding, knowledge representation and causality with statistical approximation.

    “My own strong bet is that any robust system will have some sort of mechanism for variable binding, and for performing operations over those variables once bound. But we can’t tell unless we look,” Marcus writes.
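
    To make that idea concrete, here is a minimal, hypothetical JavaScript sketch (my own illustration, not code from Marcus’ paper) of what variable binding buys you: a rule is stated once over variables and then applied to arbitrary bindings, including ones never seen before, which is exactly the kind of generalization a model locked to its training distribution struggles with.

    // Facts as explicit symbols: bob is alice's parent, carol is bob's sister.
    const facts = {
      parentOf: { alice: 'bob' },
      sisterOf: { bob: 'carol' },
    };

    // "The sister of X's parent is X's aunt" - a rule over a variable X,
    // applicable to any binding of X, not just examples seen before.
    function aunt(facts, x) {
      const parent = facts.parentOf[x];
      return parent === undefined ? undefined : facts.sisterOf[parent];
    }

    console.log(aunt(facts, 'alice')); // -> 'carol'

    Echoing Marcus’ “sister” example above, the rule generalizes to any family whose facts we can state, with no retraining required.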

    Lessons from history

    One thing to commend Marcus on is his persistence in arguing for the need to bring together all the achievements of AI to advance the field. And he has done it almost single-handedly in the past years, against overwhelming odds, at a time when most of the prominent voices in artificial intelligence were dismissing the idea of revisiting symbol manipulation.

    Marcus sticking to his guns is almost reminiscent of how Hinton, Bengio, and LeCun continued to push neural networks forward during the decades when there was little interest in them. Their faith in deep neural networks eventually bore fruit, triggering the deep learning revolution in the early 2010s and earning them a Turing Award in 2019.

    It will be interesting to see where Marcus’ quest for creating robust, hybrid AI systems will lead.


    Source

  • How to build a decentralized token bridge between Ethereum and Binance Smart Chain?

    How to build a decentralized token bridge between Ethereum and Binance Smart Chain?

    Blockchain technology keeps evolving, and it has changed significantly since 2008, when Satoshi Nakamoto introduced the first cryptocurrency, Bitcoin, to the world. Bitcoin brought along blockchain technology. Since then, multiple blockchain platforms have been launched. Every blockchain has unique features and functionality to fill the gap between blockchain technology and its real-world applications. Notwithstanding the amazing benefits of the blockchain, such as its decentralized nature, the immutability of records, distributed ledger, and smart contract technology, a major hurdle still affects blockchain’s mass adoption: the lack of interoperability.

    Although public blockchains maintain transparency in their on-chain data, their siloed nature limits the holistic utilization of blockchain in decentralized finance and many other industries. Blockchains have unique capabilities that users often want to utilize together. However, that isn’t possible, since these blockchains work independently in their isolated ecosystems and abide by their own unique consensus rules. Independent blockchains can’t interact with each other to exchange information or value.

    This interoperability issue becomes critical as blockchain networks expand and more DeFi projects go cross-chain. Meanwhile, such a siloed nature contradicts the core principle of decentralization, which revolves around making blockchain accessible to everyone. Is there any solution to this lack of interoperability? How can someone on the Ethereum network access the data and resources available on a different blockchain like Binance? That’s where bridging solutions, or blockchain bridges, come in.

    Let’s explore the bridging solutions and their working mechanisms in this article. In addition, we will also learn how to build a decentralized token bridge between Ethereum and Binance Smart Chain, two popular blockchains for DeFi development.

    What are blockchain bridges?

    A blockchain bridge enables interoperability and connectivity between two unique blockchains that operate under different consensus mechanisms. More plainly put, blockchain bridges allow two different blockchains to interact with each other. Blockchains can share smart contract execution instructions, transfer tokens, and share data and resources back and forth between two independent chains, as they no longer remain limited by their origin. These blockchains can even access off-chain data, such as live stock market data. Some of the widely used blockchain bridges are xPollinate, Matic Bridge, and Binance Bridge. Blockchain bridges provide the following benefits to users:

    • Users can leverage the benefits of two separate blockchains to create dApps, instead of relying only on the hosting blockchain. For example, a user can deploy a dApp on Solana and power it with Ethereum’s smart contract technology.
    • Users can transfer tokens from a blockchain that charges high transaction costs to another blockchain where transaction costs are comparatively cheaper.
    • With the ability to transfer tokens instantly, users can shift from a volatile cryptocurrency to Stablecoins quickly without taking the help of an intermediary.
    • One can also host digital assets on a decentralized application of a different blockchain. For example, one can create NFTs on the Cardano blockchain and host them on the Ethereum marketplace.
    • Bridging allows users to execute dApps across multiple blockchain ecosystems.

    What are the Types of Blockchain Bridges?

    To understand how blockchain bridges work, we first need to know how many types exist. Currently, there are two types of blockchain bridges: federated bridges and trustless bridges. Now, let’s understand their working mechanisms.

    Federated bridge

    A federated bridge is also known as a centralized bridge. It is essentially a kind of centralized exchange where the users interact with a pool that can be operated by a company or a middleman. If the token transfer occurs between Ether and BNB, there will be two large pools: one containing BNB and another containing Ether. As soon as the sender initiates the transfer with Ether, it gets added to the first pool, and the pool sends them an equivalent amount of BNB out of the second pool. The centralized authority charges a small fee to regulate this process, but the fee is small enough that users can pay it conveniently.

    Trustless bridge

    These are purely decentralized bridges that eliminate the role of any third party. Trustless blockchain bridges don’t even use an API to administer the process of burning and minting tokens. Instead, smart contracts play the key role here. When a user initiates a token transfer through a trustless bridge, a smart contract freezes their current cryptos and provides them a copy of equivalent tokens on the new network. The smart contract on the destination network mints the tokens because it can verify that the user has already frozen or burnt tokens on the other network.

    What are the main features of a bridging solution?

    Lock and Mint

    Tokens are not really transferred via a blockchain bridge. When a user transfers a token to another blockchain, a two-stage process takes place. First, the tokens are frozen on the current blockchain. Then, a token of equal value is minted on the receiving blockchain. If the user wants to redeem the tokens, the bridge burns the equivalent token to unlock the original value.
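
    As a rough illustration, here is a minimal, hypothetical JavaScript sketch of that two-stage accounting. A real bridge enforces these invariants with smart contracts on two chains (like the Solidity code later in this article), not with a single in-memory object.

    // Toy ledger: the amount locked on the source chain always mirrors the
    // amount minted on the destination chain.
    const bridge = {
      lockedOnSource: 0, // tokens frozen on the origin chain
      mintedOnDest: 0,   // wrapped tokens issued on the destination chain

      transfer(amount) {
        this.lockedOnSource += amount; // stage 1: freeze on the origin chain
        this.mintedOnDest += amount;   // stage 2: mint equal value on the destination
      },

      redeem(amount) {
        if (amount > this.mintedOnDest) throw new Error('nothing to redeem');
        this.mintedOnDest -= amount;   // burn the wrapped tokens
        this.lockedOnSource -= amount; // unlock the original value
      },
    };

    bridge.transfer(100);
    bridge.redeem(40);
    console.log(bridge.lockedOnSource, bridge.mintedOnDest); // -> 60 60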

    Trust-based Solution

    Trust-based decentralized blockchain bridges are popular even though they include a ‘merchant’ or trusted custodian. The custodian controls the funds (tokens) via a wallet and helps ease the token transfer process. Thus, high flexibility remains across many blockchain networks.

    Assisting Sidechain

    While a bridge links two different blockchains, a sidechain bridge connects a parent blockchain to its child blockchain. Since the parent and child blockchains exist as separate chains, they need a blockchain bridge to communicate or share data.

    Robust Management

    Bridge validators act as the network operators. These operators issue corresponding tokens in exchange for the token they receive from another network through a special smart contract.

    Cross-chain Collaterals

    Cross-chain collaterals help users move assets of significant value from one blockchain to another with low fees. Earlier, users were allowed to borrow assets only on their native chain. Now, they can leverage cross-chain borrowing through a blockchain bridge, which requires additional liquidity.

    Efficiency

    Blockchain bridges enable spontaneous micro-transfers. These transfers happen instantly between different blockchains at feasible, nominal rates.

    Why is a bridging solution needed?

    Following are the three big reasons a blockchain bridge or bridging solution is crucial:

    Multi-blockchain token transfer

    The most obvious yet crucial role of the blockchain bridge is that it enables cross-blockchain exchange. Users can instantly mint tokens on the desired blockchain without any costly or time-consuming exchange process.

    Development

    Blockchain bridges help various blockchains develop by leveraging each other’s abilities. For instance, Ethereum’s features are not natively available on BSC. Bridging solutions let the chains work and grow together to solve the challenges occurring in the blockchain space.

    Transaction fees

    The last big reason behind someone’s need for a bridging solution is transaction fees, which are often high on popular blockchains. In contrast, newer blockchains don’t impose high transaction costs, though they lack security and other major features. So, bridges allow people to access new networks, transfer tokens to those networks, and process transactions at a comparatively low cost.

    How to build a decentralized token bridge between Ethereum and Binance Smart Chain?

    Using this step-by-step procedure, you will learn how to build a completely decentralized bridge between Ethereum and Binance Smart Chain using the Solidity programming language. Although many blockchain bridges use an API to transfer tokens and information, APIs are vulnerable to hacks and can send bogus transactions once compromised. So, we will make the bridge fully decentralized by removing the API from the mechanism.

    Instead, we let the bridge script generate a signed message that the contract receives to mint the tokens after verifying the signature. The contract also makes sure that the message is unique and hasn’t been used before. That way, you give the signed message to the user, and they are in charge of submitting it to the blockchain to mint the tokens and pay for the transaction.

    First, set up a smart contract for the bridge base using the following code:

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    import './IToken.sol';

    contract BridgeBase {
      address public admin;
      IToken public token;
      // Tracks which (sender, nonce) pairs have been processed, so a signed
      // transfer can never be replayed.
      mapping(address => mapping(uint => bool)) public processedNonces;

      enum Step { Burn, Mint }

      event Transfer(
        address from,
        address to,
        uint amount,
        uint date,
        uint nonce,
        bytes signature,
        Step indexed step
      );

      constructor(address _token) {
        admin = msg.sender;
        token = IToken(_token);
      }

      // Burns tokens on the source chain and emits the event that the
      // off-chain bridge script listens for.
      function burn(address to, uint amount, uint nonce, bytes calldata signature) external {
        require(processedNonces[msg.sender][nonce] == false, 'transfer already processed');
        processedNonces[msg.sender][nonce] = true;
        token.burn(msg.sender, amount);
        emit Transfer(
          msg.sender,
          to,
          amount,
          block.timestamp,
          nonce,
          signature,
          Step.Burn
        );
      }

      // Mints tokens on the destination chain after verifying that the message
      // was signed by the original sender and has not been used before.
      function mint(
        address from,
        address to,
        uint amount,
        uint nonce,
        bytes calldata signature
      ) external {
        bytes32 message = prefixed(keccak256(abi.encodePacked(
          from,
          to,
          amount,
          nonce
        )));
        require(recoverSigner(message, signature) == from, 'wrong signature');
        require(processedNonces[from][nonce] == false, 'transfer already processed');
        processedNonces[from][nonce] = true;
        token.mint(to, amount);
        emit Transfer(
          from,
          to,
          amount,
          block.timestamp,
          nonce,
          signature,
          Step.Mint
        );
      }

      // Replicates the prefix that eth_sign adds before hashing.
      function prefixed(bytes32 hash) internal pure returns (bytes32) {
        return keccak256(abi.encodePacked(
          '\x19Ethereum Signed Message:\n32',
          hash
        ));
      }

      function recoverSigner(bytes32 message, bytes memory sig)
        internal
        pure
        returns (address)
      {
        (uint8 v, bytes32 r, bytes32 s) = splitSignature(sig);
        return ecrecover(message, v, r, s);
      }

      // Splits a 65-byte signature into its (v, r, s) components.
      function splitSignature(bytes memory sig)
        internal
        pure
        returns (uint8, bytes32, bytes32)
      {
        require(sig.length == 65);
        bytes32 r;
        bytes32 s;
        uint8 v;
        assembly {
          // first 32 bytes, after the length prefix
          r := mload(add(sig, 32))
          // second 32 bytes
          s := mload(add(sig, 64))
          // final byte (first byte of the next 32 bytes)
          v := byte(0, mload(add(sig, 96)))
        }
        return (v, r, s);
      }
    }

    After writing the bridge base contract, create the Binance Smart Chain bridge, which inherits from it:

    pragma solidity ^0.8.0;

    import './BridgeBase.sol';

    contract BridgeBsc is BridgeBase {
      constructor(address token) BridgeBase(token) {}
    }

    Next, create the other side of the decentralized token bridge, the Ethereum bridge, using the following code:

    pragma solidity ^0.8.0;

    import './BridgeBase.sol';

    contract BridgeEth is BridgeBase {
      constructor(address token) BridgeBase(token) {}
    }

    Once done with the bridge contracts, define the IToken interface that exposes the mint and burn functions the bridges rely on:

    pragma solidity ^0.8.0;

    interface IToken {
      function mint(address to, uint amount) external;
      function burn(address owner, uint amount) external;
    }

    Next, add the Migrations contract that Truffle uses to track deployments:

    // SPDX-License-Identifier: MIT
    pragma solidity >=0.4.22 <0.9.0;

    contract Migrations {
      address public owner = msg.sender;
      uint public last_completed_migration;

      modifier restricted() {
        require(
          msg.sender == owner,
          "This function is restricted to the contract's owner"
        );
        _;
      }

      function setCompleted(uint completed) public restricted {
        last_completed_migration = completed;
      }
    }

    Now, write the smart contract for the token base.

    pragma solidity ^0.8.0;

    import '@openzeppelin/contracts/token/ERC20/ERC20.sol';

    contract TokenBase is ERC20 {
      address public admin;

      constructor(string memory name, string memory symbol) ERC20(name, symbol) {
        admin = msg.sender;
      }

      // The bridge contract becomes the admin, so only it can mint and burn.
      function updateAdmin(address newAdmin) external {
        require(msg.sender == admin, 'only admin');
        admin = newAdmin;
      }

      function mint(address to, uint amount) external {
        require(msg.sender == admin, 'only admin');
        _mint(to, amount);
      }

      function burn(address owner, uint amount) external {
        require(msg.sender == admin, 'only admin');
        _burn(owner, amount);
      }
    }

    Once the token base is written, create the token for Binance Smart Chain using the given code:

    pragma solidity ^0.8.0;

    import './TokenBase.sol';

    contract TokenBsc is TokenBase {
      constructor() TokenBase('BSC Token', 'BTK') {}
    }

    Next, deploy the token on Ethereum using the given code:

    pragma solidity ^0.8.0;

    import './TokenBase.sol';

    contract TokenEth is TokenBase {
      constructor() TokenBase('ETH Token', 'ETK') {}
    }

    With the token contracts in place, program the initial migration:

    const Migrations = artifacts.require("Migrations");

    module.exports = function (deployer) {
      deployer.deploy(Migrations);
    };

    Now, deploy the tokens and bridges on Ethereum and Binance Smart Chain with the following migration:

    const TokenEth = artifacts.require('TokenEth.sol');
    const TokenBsc = artifacts.require('TokenBsc.sol');
    const BridgeEth = artifacts.require('BridgeEth.sol');
    const BridgeBsc = artifacts.require('BridgeBsc.sol');

    module.exports = async function (deployer, network, addresses) {
      if (network === 'ethTestnet') {
        await deployer.deploy(TokenEth);
        const tokenEth = await TokenEth.deployed();
        await tokenEth.mint(addresses[0], 1000);
        await deployer.deploy(BridgeEth, tokenEth.address);
        const bridgeEth = await BridgeEth.deployed();
        await tokenEth.updateAdmin(bridgeEth.address);
      }
      if (network === 'bscTestnet') {
        await deployer.deploy(TokenBsc);
        const tokenBsc = await TokenBsc.deployed();
        await deployer.deploy(BridgeBsc, tokenBsc.address);
        const bridgeBsc = await BridgeBsc.deployed();
        await tokenBsc.updateAdmin(bridgeBsc.address);
      }
    };
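
    For this migration to run, the ethTestnet and bscTestnet network names must be defined in the project’s truffle-config.js. The article doesn’t show that file, so here is a minimal, hypothetical sketch; the provider URL, mnemonic and compiler version are placeholders to adapt to your own setup.

    // truffle-config.js (sketch); requires @truffle/hdwallet-provider
    const HDWalletProvider = require('@truffle/hdwallet-provider');

    const mnemonic = 'twelve word mnemonic of the deployer account ...'; // placeholder

    module.exports = {
      networks: {
        ethTestnet: {
          provider: () => new HDWalletProvider(mnemonic, 'url to an Ethereum testnet node'),
          network_id: 4, // matches BridgeEth.networks['4'] in the listener script below
        },
        bscTestnet: {
          provider: () => new HDWalletProvider(mnemonic, 'https://data-seed-prebsc-1-s1.binance.org:8545'),
          network_id: 97, // matches BridgeBsc.networks['97']
        },
      },
      compilers: {
        solc: { version: '0.8.13' },
      },
    };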

    Once the bridge is deployed, check the recipient’s token balance on Binance Smart Chain with this script:

    const TokenBsc = artifacts.require('./TokenBsc.sol');

    module.exports = async done => {
      const [recipient, _] = await web3.eth.getAccounts();
      const tokenBsc = await TokenBsc.deployed();
      const balance = await tokenBsc.balanceOf(recipient);
      console.log(balance.toString());
      done();
    }

    Next, program the off-chain bridge script that listens for the transfer events:

    const Web3 = require('web3');
    const BridgeEth = require('../build/contracts/BridgeEth.json');
    const BridgeBsc = require('../build/contracts/BridgeBsc.json');

    const web3Eth = new Web3('url to eth node (websocket)');
    const web3Bsc = new Web3('https://data-seed-prebsc-1-s1.binance.org:8545');
    const adminPrivKey = '';
    const { address: admin } = web3Bsc.eth.accounts.wallet.add(adminPrivKey);

    const bridgeEth = new web3Eth.eth.Contract(
      BridgeEth.abi,
      BridgeEth.networks['4'].address
    );
    const bridgeBsc = new web3Bsc.eth.Contract(
      BridgeBsc.abi,
      BridgeBsc.networks['97'].address
    );

    // Watch for burn events on the Ethereum bridge and replay each one as a
    // mint transaction on the BSC bridge, signed by the bridge admin.
    bridgeEth.events.Transfer({ fromBlock: 0, step: 0 })
      .on('data', async event => {
        const { from, to, amount, date, nonce, signature } = event.returnValues;
        const tx = bridgeBsc.methods.mint(from, to, amount, nonce, signature);
        const [gasPrice, gasCost] = await Promise.all([
          web3Bsc.eth.getGasPrice(),
          tx.estimateGas({ from: admin }),
        ]);
        const data = tx.encodeABI();
        const txData = {
          from: admin,
          to: bridgeBsc.options.address,
          data,
          gas: gasCost,
          gasPrice
        };
        const receipt = await web3Bsc.eth.sendTransaction(txData);
        console.log(`Transaction hash: ${receipt.transactionHash}`);
        console.log(`
          Processed transfer:
          - from ${from}
          - to ${to}
          - amount ${amount} tokens
          - date ${date}
          - nonce ${nonce}
        `);
      });

    Now, run the script that signs a transfer message with the sender’s private key and burns the tokens on the Ethereum bridge:

    const BridgeEth = artifacts.require('./BridgeEth.sol');

    const privKey = 'priv key of sender';

    module.exports = async done => {
      const nonce = 1; // Need to increment this for each new transfer
      const accounts = await web3.eth.getAccounts();
      const bridgeEth = await BridgeEth.deployed();
      const amount = 1000;
      // Hash the same fields, in the same order, that mint() hashes on-chain,
      // so the signature recovers to the sender's address.
      const message = web3.utils.soliditySha3(
        { t: 'address', v: accounts[0] },
        { t: 'address', v: accounts[0] },
        { t: 'uint256', v: amount },
        { t: 'uint256', v: nonce },
      ).toString('hex');
      const { signature } = web3.eth.accounts.sign(message, privKey);
      await bridgeEth.burn(accounts[0], amount, nonce, signature);
      done();
    }

    At last, program the token balance script for the Ethereum side:

    const TokenEth = artifacts.require('./TokenEth.sol');

    module.exports = async done => {
      const [sender, _] = await web3.eth.getAccounts();
      const tokenEth = await TokenEth.deployed();
      const balance = await tokenEth.balanceOf(sender);
      console.log(balance.toString());
      done();
    }

    To run the demo, follow the given steps:

    To deploy the bridge smart contract on Ethereum, run the following command against the Ethereum testnet:

    ~ETB/code/screencast/317-eth-bsc-decenrealized-bridge $ truffle migrate --reset --network ethTestnet

    To deploy the bridge smart contract on Binance Smart Chain, run the following command against the BSC testnet:

    ~ETB/code/screencast/317-eth-bsc-decenrealized-bridge $ truffle migrate --reset --network bscTestnet

    Conclusion

    The advent of blockchain bridges has made blockchain a more mainstream technology. Bridging solutions also aid the design of DeFi applications that empower the prospect of a decentralized financial system. By enabling different blockchains to connect and work together, blockchain bridges help users head toward the next-generation decentralized system and aim to end the dominance of centralized systems in the business ecosystem. Meanwhile, the blockchain space keeps producing new paradigms that reinvent existing bridges and promote greater innovation and technological relevance.

  • Are We Living in a Simulated Reality?

    Are We Living in a Simulated Reality?

     

    According to some theorists, we are living in a simulated reality. This theory is based on the idea that the world we experience is nothing more than a computer simulation. Furthermore, some scientists believe that an advanced civilization could have created this simulation.

    We spend so much time inside computers and phones that it’s hard to imagine life without them. But what if we’re living in a simulated reality?

    Some people think that computers could be creating simulations of different worlds in which to play, while others believe that our entire reality could be just one extensive computer simulation.

    What is defined as Real?

    When discussing what is real, it’s important to define what is meant by the term. For some, reality is what can be experienced through the five senses. Anything that exists outside of that is considered fake or simulated.

    Others may believe that reality is more than just what can be perceived with the senses. It may also include things that are beyond our understanding or knowledge.

    In the movie “The Matrix,” Morpheus asks Neo what is real. This is a question that people have asked throughout history. Philosophers have debated this question for centuries. What is real? Is it the physical world that we can see and touch? Or is it something else?

    What is real? How do you define ‘real’? If you’re talking about what you can feel, what you can smell, what you can taste and see, then ‘real’ is simply electrical signals interpreted by your brain.

    -Morpheus, The Matrix

     

    Some people believe that there is more to reality than what we can see and touch. They believe that a spiritual world exists beyond our physical world. Others believe that reality is nothing more than an illusion.

    There is no single answer to this question, as it varies from individual to individual. What one person considers real may not be seen as such by someone else. This makes it a difficult topic to debate or discuss.

    The Matrix: A movie or a Documentary?

    There is a lot of debate over whether the 1999 movie The Matrix is a work of fiction or a documentary.

    The Matrix is a movie based on the idea of simulated reality. It asks the question, what if our world is not what we think it is? What if we are living in a simulation? The movie takes this idea and runs with it, creating a believable and fascinating world.

     

    However, some people believe that The Matrix is more than just a movie. They think that it is a documentary. Our world is a simulated reality, and we live in it without knowing it. While this may seem like a crazy idea, it does have some basis in science.

    Simulated reality is something that scientists are currently studying, and there is evidence that suggests it could be possible. So, while The Matrix may be a movie, it can also be read as an exploration of a real question: whether our reality is simulated.

    The Simulation Theory

    The theory is that we might be living in a simulated reality. Proponents of the simulation theory say that it’s plausible because computing power increases exponentially.

    If we could create a simulated world indistinguishable from reality, why wouldn’t advanced simulators have already done so?

    Some scientists even believe that we’re already living in a computer-generated simulation and that our consciousness is just a program or algorithm.

    A theory suggests that we are all living in a simulated reality. This theory, known as the simulation theory, indicates that at some point in our history, a computer program was created that allows us to experience life as if we were living in the real world.

    Some people believe that this theory could explain the mysteries of our existence, such as why we are here and what happens when we die.

    An early version of this idea was proposed by philosopher René Descartes in 1641. However, it wasn’t until the 1970s that the theory began to gain popularity, thanks to the development of computers and, later, artificial intelligence.

    Then, in 2003, philosopher Nick Bostrom published a paper titled “Are You Living in a Computer Simulation?” which revived interest in the theory.

    While there’s no definitive proof that we’re living in a simulation, the theory raises some interesting questions.

    What if everything we experience is just an illusion? What does that mean for our understanding of reality and ourselves?

    How could we know if we’re living in a simulation?

    There are a few different ways to approach whether or not we’re living in a simulation. One way is to look at the feasibility of creating a simulated world. If it’s possible to create a simulated world that is indistinguishable from the real world, then simulated worlds would vastly outnumber the one real world, and statistically we would be more likely to be living in one of the simulations.

    Another way to determine if we’re living in a simulation is to look at the development of artificial intelligence. If artificial intelligence surpasses human intelligence and becomes able to create its own simulations, then it’s likely that we’re living in a simulated world.

    Whether or not we live in a computer-generated simulation has been debated by philosophers and scientists for centuries. Still, recent advancements in artificial intelligence (AI) have brought the topic back into the spotlight.

    Some experts believe that if we create intelligent machines, they could eventually become powerful enough to create their own simulations, leading to an infinite number of universes — including ours.

    So how could we know if we’re living in a simulation? One way would be to see if the laws of physics can be simulated on a computer. Another approach is to look for glitches or inaccuracies in the universe that could suggest it’s fake. However, both methods are complicated to execute and may not provide conclusive results.

    The bottom line is that we may never know whether or not we’re living in a simulation.

    Final Thought

    While the likelihood of living in a simulated reality is still up for debate, the ramifications of such a possibility are far-reaching.

    If we were to find ourselves in a simulated world, it would force us to re-evaluate our understanding of reality and what it means to be human. It would also raise important questions about the nature of existence and our place in the universe.

    Apr 18

    Source

  • Hybrid AI Will Go Mainstream in 2022

    Hybrid AI Will Go Mainstream in 2022

    Analysts predict an AI boom, driven by new possibilities and record funding. While challenges remain, a hybrid approach combining the best of both worlds may finally send AI sailing into the mainstream.

    Artificial intelligence (AI) is becoming the dominant trend in data ecosystems around the world, and by all counts, it will accelerate as the decade unfolds. The more the data community learns about AI and what it can do, the faster it empowers IT systems and structures. This is primarily why IDC predicts the market will top $500 billion as early as 2024, with penetration across virtually all industries driving a wealth of applications and services designed to make work more effective. In fact, CB Insights Research reported that at the close of Q3 2021, funding for AI companies had already surpassed 2020 levels by roughly 55%, setting a global record for the fourth consecutive quarter.

    In 2022, we can expect AI to become better at solving the practical problems that hamper processes driven by unstructured language data, thanks to improvements in complex cognitive tasks such as natural language understanding (NLU). At the same time, there will be increased scrutiny into how and why AI does what it does, such as ongoing efforts by the U.S. National Institute of Standards and Technology (NIST) aimed at more explainable AI. This will require greater transparency into AI’s algorithmic functions without diminishing its performance or raising costs.

    You shall know a word by the company it keeps

    Of all the challenges that AI must cope with, understanding language is one of the toughest. While most AI solutions can crunch massive volumes of raw numbers or structured data in the blink of an eye, the multitude of meanings and nuances in language, which shift with context, is another matter entirely. More often than not, words are contextual: they convey different meanings in different circumstances. Something easy and natural for our brains is not that easy for any piece of software.

     

    This is why the development of software that can interpret language correctly and reliably has become a critical factor in the development of AI across the board. Achieving this level of computational prowess would open the floodgates of AI development by allowing it to access and ingest virtually any kind of knowledge.

    NLU is a vital piece of this puzzle by virtue of its ability to leverage the wealth of language-based information. Language inhabits all aspects of enterprise activity, which means that an AI approach cannot be complete without extracting as much value as possible from this type of data.

    A knowledge-based, or symbolic, AI approach leverages a knowledge graph, which is an open box. Its structure is created by humans and is understood to represent the real world, where concepts are defined and related to each other by semantic relationships. Thanks to knowledge graphs and NLU algorithms, you can read and learn from any text out of the box and gain a true understanding of how data is being interpreted and how conclusions are drawn from that interpretation. This is similar to how we as humans create our own specific, domain-oriented knowledge, and it enables AI projects to link their algorithmic results to explicit representations of knowledge.
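
    As a toy illustration of that open box, here is a minimal, hypothetical JavaScript sketch of a knowledge graph stored as subject-predicate-object triples; real knowledge graphs, and the semantics layered on top of them, are far richer.

    // A tiny knowledge graph: human-readable facts as triples.
    const triples = [
      ['aspirin', 'isA', 'drug'],
      ['aspirin', 'treats', 'headache'],
      ['headache', 'isA', 'symptom'],
    ];

    // Follow a semantic relationship from a given concept.
    function query(subject, predicate) {
      return triples
        .filter(([s, p]) => s === subject && p === predicate)
        .map(([, , o]) => o);
    }

    console.log(query('aspirin', 'treats')); // -> ['headache']

    Because every fact is an explicit, human-readable statement, any answer the system gives can be traced back to the facts that produced it, which is the kind of transparency described above.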

    In 2022, we should see a definitive shift toward this kind of AI approach, one that combines different techniques. Hybrid AI leverages different techniques to improve overall results and better tackle complex cognitive problems, and it is an increasingly popular approach for NLU and natural language processing (NLP). Bringing together the best of knowledge-based, symbolic AI and learning models (machine learning, ML) is the most effective way to unlock the value of unstructured language data with the accuracy, speed and scale required by today’s businesses.

    Not only will the use of knowledge, symbolic reasoning and semantic understanding produce more accurate results and a more efficient, effective AI environment, it will also reduce the need for cumbersome, resource-intensive training on wasteful volumes of documents using expensive, high-speed data infrastructure. Domain-specific knowledge can be added through subject matter experts and/or machine learning algorithms that analyze small, pinpointed training sets of data to produce highly accurate, actionable results quickly and efficiently.

    The world of hybrid AI

    But why is this transition happening now? Why hasn’t AI been able to harness language-based knowledge previously? We have been led to believe that learning approaches can solve any of our problems. In some cases, they can, but just because ML does well with certain needs and specific contexts doesn’t mean it is always the best method. And we see this all too often when it comes to the ability to understand and process language. Only in the past few years have we seen significant advancements in NLU based on hybrid (or composite) AI approaches.

    Rather than throwing one form of AI, with its limited set of tools, at a problem, we can now utilize multiple, different approaches. Each can target the problem from a different angle, using different models, to evaluate and solve the issue in a multi-contextual way. And since each of these techniques can be evaluated independently of one another, it becomes easier to determine which ones deliver the most optimal outcomes.

    With the enterprise already having gotten a taste of what AI can do, this hybrid approach is poised to become a strategic initiative in 2022. It produces significant time and cost benefits, while boosting the speed, accuracy and efficiency of analytical and operational processes. To take just one example, the process of annotation is currently performed by select experts, in large part due to the difficulty and expense of training. By combining the proper knowledge repositories and graphs, however, the training can be vastly simplified so that the process itself can be democratized among the knowledge workforce.

    More to Come

    Of course, research in all forms of AI is ongoing. But we will see particular focus on expanding the knowledge graph and automating ML and other techniques because enterprises are under constant pressure to leverage vast amounts of data quickly and at low cost.

    As the year unfolds, we will see steady improvements in the way organizations apply these hybrid models to some of their most core processes. Business automation in the form of email management and search is already in sight. The current keyword-based search approach, for instance, is inherently incapable of absorbing and interpreting entire documents, which is why it can only extract basic, largely non-contextual information. Likewise, automated email management systems can rarely penetrate meaning beyond simple product names and other points of information. In the end, users are left to sort through a long list of hits trying to find the salient pieces of knowledge. This slows down processes, delays decision-making and ultimately hampers productivity and revenue.

    Empowering NLU tools with symbolic comprehension under a hybrid framework will give all knowledge-based organizations the ability to mimic the human ability to comprehend entire documents across their intelligent, automated processes.

    By the CTO at expert.ai on March 2, 2022, in Artificial Intelligence

  • What is Hybrid AI?

    What is Hybrid AI?

     

    Researchers are working to combine the strengths of symbolic AI and neural networks to develop Hybrid AI.

    As the research community makes progress in artificial intelligence and deep learning, scientists are increasingly feeling the need to move towards hybrid artificial intelligence. Hybrid AI is touted to solve fundamental problems that deep learning faces today. 

    Hybrid AI brings together the best aspects of neural networks and symbolic AI. Neural networks ingest huge data sets (visual, audio, textual, emails, chat logs, etc.) and extract patterns from them. Then, rule-based AI systems can manipulate the retrieved information using symbol-manipulating algorithms.
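
    As a minimal, hypothetical sketch of that division of labor (the names and threshold are invented, and the neural model is reduced to a stub), the following JavaScript shows pattern extraction feeding a rule-based layer:

    // Stand-in for a trained NLP model that turns raw text into symbols.
    function neuralExtract(text) {
      return text.toLowerCase().includes('refund')
        ? { intent: 'refund_request', confidence: 0.93 }
        : { intent: 'other', confidence: 0.55 };
    }

    // Symbolic layer: explicit, auditable rules over the extracted symbols.
    function decide(extraction) {
      if (extraction.intent === 'refund_request' && extraction.confidence > 0.9) {
        return 'route to billing team';
      }
      return 'route to general inbox';
    }

    console.log(decide(neuralExtract('I would like a refund, please')));
    // -> 'route to billing team'

    In practice the extraction step would be a trained model; the point is that the downstream reasoning stays explicit and inspectable.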

    Researchers are working to develop hybrid AI systems that can figure out simple abstract relations between objects and reason about them as effortlessly as a human brain.

    What is symbolic AI?

    During the 1960s and 1970s, new technological advances were met with researchers’ increasing desire to understand how machines and nature interact. Researchers believed that using symbolic approaches would inevitably produce an artificially intelligent machine, which was seen as their discipline’s long-term goal.

    The term “good old-fashioned artificial intelligence,” or “GOFAI,” was coined by John Haugeland in his 1985 book Artificial Intelligence: The Very Idea, which explored artificial intelligence’s ethical and philosophical implications. Since the initial efforts to build thinking computers in the 1950s, research and development in the AI field have followed two parallel approaches: symbolic AI and connectionist AI.

    Symbolic AI (also known as Classical AI) is an area of artificial intelligence research that focuses on attempting to express human knowledge clearly in a declarative form, that is, facts and rules. From the mid-1950s until the late 1980s, there was significant use of symbolic artificial intelligence. On the other hand, in recent years, a connectionist approach such as machine learning with deep neural networks has come to the forefront.

    Combining symbolic AI and neural networks 

     

    There has been a shift from the symbolic approach in the past few years due to its technical limits. 

    According to David Cox, IBM Director of the MIT-IBM Watson AI Lab, deep learning and neural networks excel at the “messiness of the world,” but symbolic AI does not. Neural networks meticulously study and compare a large number of annotated instances to discover significant relationships and create corresponding mathematical models.

    Several prominent IT businesses and academic labs have put significant effort into the use of deep learning. Neural networks and deep learning excel at tasks where symbolic AI fails. As a result, it’s being used to tackle complex challenges today. For example, deep learning has made significant contributions to the computer vision revolution with use cases in facial recognition and tuberculosis detection. Language-related activities have also benefited from deep learning breakthroughs.

    There are, however, certain limits to deep learning and neural networks. One is that they depend on the availability of large volumes of data. In addition, neural networks are vulnerable to adversarial examples, also known as adversarial data, which can manipulate an AI model’s behaviour in unpredictable and harmful ways.

    However, when combined, symbolic AI and neural networks can form a good base for developing hybrid AI systems.

    Future of hybrid AI 

    The hybrid AI model utilises the neural network’s ability to process and evaluate unstructured data while also using symbolic AI techniques. Connectionist viewpoints argue that techniques based on neural networks will eventually provide sophisticated and broadly applicable AI. In 2019, the International Conference on Learning Representations (ICLR) featured a paper in which researchers combined neural networks with rule-based artificial intelligence to create an AI model. This approach is called the “Neuro-Symbolic Concept Learner” (NSCL), and it claims to overcome the difficulties AI faces and to be superior to the sum of its parts. The NSCL, a hybrid AI system developed by researchers at MIT and IBM, tackles visual question answering (VQA) problems by using neural networks in conjunction with symbolic AI, with remarkable accuracy. The researchers demonstrated that the NSCL was able to handle the VQA dataset CLEVR. Even more important, the hybrid AI model could achieve outstanding results with less training data, overcoming two long-standing deep learning challenges.

    Even Google’s search engine is a complex, all-in-one AI system made up of cutting-edge deep learning tools such as Transformers and advanced symbol-manipulation tools like the knowledge graph.

    Source

Virtual Identity