
Are We Living in a Simulated Reality?

 

According to some theorists, we are living in a simulated reality. This theory is based on the idea that the world we experience is nothing more than a computer simulation. Furthermore, some scientists believe that an advanced civilization could create this simulation.

We spend so much time inside computers and phones that it’s hard to imagine life without them. But what if we’re living in a simulated reality?

Some people think that computers could be creating simulations of different worlds in which to play, while others believe that our entire reality could be just one extensive computer simulation.

What is defined as Real?

When discussing what is real, it’s important to define what is meant by the term. For some, reality is what can be experienced through the five senses; anything outside of that is considered fake or simulated.

Others may believe that reality is more than just what can be perceived with the senses. It may also include things that are beyond our understanding or knowledge.

In the movie “The Matrix,” Morpheus asks Neo what is real. This is a question that people have asked throughout history. Philosophers have debated this question for centuries. What is real? Is it the physical world that we can see and touch? Or is it something else?

What is real? How do you define ‘real’? If you’re talking about what you can feel, what you can smell, what you can taste and see, then ‘real’ is simply electrical signals interpreted by your brain.

-Morpheus, The Matrix

 

Some people believe that there is more to reality than what we can see and touch. They believe that a spiritual world exists beyond our physical world. Others believe that reality is nothing more than an illusion.

There is no single answer to this question, as it varies from individual to individual. What one person considers real may not be seen as such by someone else. This makes it a difficult topic to debate or discuss.

The Matrix: A movie or a Documentary?

There is a lot of debate over whether the 1999 movie The Matrix is a work of fiction or a documentary.

The Matrix is a movie based on the idea of simulated reality. It asks the question: what if our world is not what we think it is? What if we are living in a simulation? The movie takes this idea and runs with it, creating a believable and fascinating world.

 

However, some people believe that The Matrix is more than just a movie. They think of it as a documentary: our world is a simulated reality, and we live in it without knowing it. While this may seem like a crazy idea, it does have some basis in science.

Simulated reality is something that scientists are currently studying, and there is evidence to suggest it could be possible. So while The Matrix may be a movie, the simulated reality it explores may have some grounding in fact.

The Simulation Theory

The theory is that we might be living in a simulated reality. Proponents of the simulation theory say that it’s plausible because computing power increases exponentially.

If it were possible to create a simulated world indistinguishable from reality, why wouldn’t an advanced civilization have already done so?

Some scientists even believe that we’re already living in a computer-generated simulation and that our consciousness is just a program or algorithm.


A theory suggests that we are all living in a simulated reality. This theory, known as the simulation theory, holds that at some point in our history, humans created a computer program that allows us to experience life as if we were living in the real world.

Some people believe that this theory could explain the mysteries of our existence, such as why we are here and what happens when we die.

An early precursor of the simulation theory was proposed by philosopher René Descartes in 1641, with his thought experiment of a deceiving demon conjuring a false reality. However, it wasn’t until the 1970s that the idea began to gain popularity, due to the development of computers and, later, artificial intelligence.

Then, in 2003, philosopher Nick Bostrom published a paper titled “Are You Living in a Computer Simulation?” which revived interest in the theory.

While there’s no definitive proof that we’re living in a simulation, the theory raises some interesting questions.

What if everything we experience is just an illusion? What does that mean for our understanding of reality and ourselves?

How could we know if we’re living in a simulation?

There are a few different ways to assess whether or not we’re living in a simulation. One way is to look at the feasibility of creating a simulated world. If it’s possible to create a simulated world that is indistinguishable from the real world, then, the argument goes, simulated worlds would vastly outnumber the one real world, and we’re likely living in a simulation.

Another way to determine if we’re living in a simulation is to look at the development of artificial intelligence. If artificial intelligence surpasses human intelligence and becomes able to create its own simulations, then it’s likely that we’re living in a simulated world.

Philosophers have debated the genuineness of reality for centuries. Still, recent advancements in artificial intelligence (AI) have brought the question of whether we live in a computer-generated simulation back into the spotlight.

Some experts believe that if we create intelligent machines, they could eventually become powerful enough to create their own simulations, leading to an infinite number of universes — including ours.

So how could we know if we’re living in a simulation? One way would be to see if the laws of physics can be simulated on a computer. Another approach is to look for glitches or inaccuracies in the universe that could suggest it’s fake. However, both methods are complicated to execute and may not provide conclusive results.

The bottom line is that we may never know whether or not we’re living in a simulation.

Final Thought

While the likelihood that we are living in a simulated reality is still up for debate, the ramifications of such a possibility are far-reaching.

If we were to find ourselves in a simulated world, it would force us to re-evaluate our understanding of reality and what it means to be human. It would also raise important questions about the nature of existence and our place in the universe.




Source


Hybrid AI Will Go Mainstream in 2022

Analysts predict an AI boom, driven by new possibilities and record funding. While challenges remain, a hybrid approach combining the best of both worlds may finally send it sailing into the mainstream.

Artificial intelligence (AI) is becoming the dominant trend in data ecosystems around the world, and by all counts, it will accelerate as the decade unfolds. The more the data community learns about AI and what it can do, the faster it empowers IT systems and structures. This is primarily why IDC predicts the market will top $500 billion as early as 2024, with penetration across virtually all industries driving a wealth of applications and services designed to make work more effective. In fact, CB Insights Research reported that at the close of Q3 2021, funding for AI companies had already surpassed 2020 levels by roughly 55%, setting a global record for the fourth consecutive quarter.

In 2022, we can expect AI to become better at solving the practical problems that hamper processes driven by unstructured language data, thanks to improvements in complex cognitive tasks such as natural language understanding (NLU). At the same time, there will be increased scrutiny into how and why AI does what it does, such as ongoing efforts by the U.S. National Institute of Standards and Technology (NIST) aimed at more explainable AI. This will require greater transparency into AI’s algorithmic functions without diminishing its performance or raising costs.

You shall know a word by the company it keeps

Of all the challenges that AI must cope with, understanding language is one of the toughest. While most AI solutions can crunch massive volumes of raw numbers or structured data in the blink of an eye, the multitude of meanings and nuances in language, depending on the context they appear in, is another matter entirely. More often than not, words are contextual, which means they convey different meanings in different circumstances. Something easy and natural for our brains is not that easy for any piece of software.

 

This is why the development of software that can interpret language correctly and reliably has become a critical factor in the development of AI across the board. Achieving this level of computational prowess would open the floodgates of AI development by allowing it to access and ingest virtually any kind of knowledge.

NLU is a vital piece of this puzzle by virtue of its ability to leverage the wealth of language-based information. Language inhabits all aspects of enterprise activity, which means that an AI approach cannot be complete without extracting as much value as possible from this type of data.

A knowledge-based, or symbolic, AI approach leverages a knowledge graph: an open box, in contrast to the black box of a pure learning model. Its structure is created by humans and is understood to represent the real world, where concepts are defined and related to each other by semantic relationships. Thanks to knowledge graphs and NLU algorithms, such a system can read and learn from any text out of the box, and you gain a true understanding of how data is being interpreted and how conclusions are being drawn from that interpretation. This is similar to how we as humans create our own specific, domain-oriented knowledge, and it enables AI projects to link their algorithmic results to explicit representations of knowledge.

In 2022, we should see a definitive shift toward this kind of hybrid approach, one that combines different techniques. Hybrid AI leverages different techniques to improve overall results and better tackle complex cognitive problems, and it is an increasingly popular approach for NLU and natural language processing (NLP). Bringing together the best of knowledge-based, symbolic AI and learning models (machine learning, ML) is the most effective way to unlock the value of unstructured language data with the accuracy, speed and scale required by today’s businesses.

Not only will the use of knowledge, symbolic reasoning and semantic understanding produce more accurate results and a more efficient, effective AI environment, it will also reduce the need for cumbersome and resource-intensive training, based on wasteful volumes of documents on expensive, high-speed data infrastructure. Domain-specific knowledge can be added through subject matter experts and/or machine learning algorithms leveraging the analysis of small and pinpointed training sets of data to produce highly accurate, actionable results quickly and efficiently. 
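To make the hybrid idea concrete, here is a minimal sketch, assuming a toy domain lexicon and the scikit-learn library; the rules, training texts and labels are all illustrative stand-ins, not expert.ai’s actual implementation. A symbolic layer of explicit, human-readable rules answers what it can, and a small machine learning classifier handles everything else:

```python
# Minimal hybrid-AI sketch: explicit symbolic rules backed by a small
# ML classifier. All rules, texts and labels are illustrative stand-ins.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Symbolic side: transparent, human-curated domain knowledge.
DOMAIN_RULES = {
    "invoice": "finance",
    "refund": "finance",
    "diagnosis": "healthcare",
    "prescription": "healthcare",
}

# Learning side: a tiny classifier trained on a small, pinpointed set.
train_texts = [
    "please process the attached invoice",
    "customer requested a refund on order 42",
    "patient diagnosis recorded after the visit",
    "renew the prescription for the patient",
]
train_labels = ["finance", "finance", "healthcare", "healthcare"]

vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(train_texts), train_labels)

def classify(text: str) -> str:
    """Try the explainable rules first; fall back to the ML model."""
    for keyword, label in DOMAIN_RULES.items():
        if keyword in text.lower():
            return f"{label} (matched rule: {keyword!r})"
    return f"{model.predict(vectorizer.transform([text]))[0]} (ML fallback)"

print(classify("Refund the duplicate charge"))         # rule fires, fully explainable
print(classify("Schedule the patient for a checkup"))  # no rule, learned model decides
```

The symbolic path is fully explainable (you can point to the exact rule that fired), while the learned path generalizes to wordings the rules never anticipated. That division of labor is the essence of the hybrid approach.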

The world of hybrid AI

But why is this transition happening now? Why hasn’t AI been able to harness language-based knowledge previously? We have been led to believe that learning approaches can solve any of our problems. In some cases, they can, but just because ML does well with certain needs and specific contexts doesn’t mean it is always the best method. And we see this all too often when it comes to the ability to understand and process language. Only in the past few years have we seen significant advancements in NLU based on hybrid (or composite) AI approaches.

Rather than throwing one form of AI, with its limited set of tools, at a problem, we can now utilize multiple, different approaches. Each can target the problem from a different angle, using different models, to evaluate and solve the issue in a multi-contextual way. And since each of these techniques can be evaluated independently of one another, it becomes easier to determine which ones deliver the most optimal outcomes.

With the enterprise already having gotten a taste of what AI can do, this hybrid approach is poised to become a strategic initiative in 2022. It produces significant time and cost benefits, while boosting the speed, accuracy and efficiency of analytical and operational processes. To take just one example, the process of annotation is currently performed by select experts, in large part due to the difficulty and expense of training. By combining the proper knowledge repositories and graphs, however, the training can be vastly simplified so that the process itself can be democratized among the knowledge workforce.

More to Come

Of course, research in all forms of AI is ongoing. But we will see particular focus on expanding the knowledge graph and automating ML and other techniques because enterprises are under constant pressure to leverage vast amounts of data quickly and at low cost.

As the year unfolds, we will see steady improvements in the way organizations apply these hybrid models to some of their most core processes. Business automation in the form of email management and search is already in sight. The current keyword-based search approach, for instance, is inherently incapable of absorbing and interpreting entire documents, which is why it can only extract basic, largely non-contextual information. Likewise, automated email management systems can rarely penetrate meaning beyond simple product names and other points of information. In the end, users are left to sort through a long list of hits trying to find the salient pieces of knowledge. This slows down processes, delays decision-making and ultimately hampers productivity and revenue.

Empowering NLU tools with symbolic comprehension under a hybrid framework will give all knowledge-based organizations the ability to mimic the human ability to comprehend entire documents across their intelligent, automated processes.

By the CTO at expert.ai on March 2, 2022 in Artificial Intelligence


What is Hybrid AI?

 

Researchers are working to combine the strengths of symbolic AI and neural networks to develop Hybrid AI.

As the research community makes progress in artificial intelligence and deep learning, scientists are increasingly feeling the need to move towards hybrid artificial intelligence. Hybrid AI is touted to solve fundamental problems that deep learning faces today. 

Hybrid AI brings together the best aspects of neural networks and symbolic AI. Neural networks extract patterns from huge data sets (visual and audio, textual, emails, chat logs, etc.). Then, rule-based AI systems manipulate the retrieved information, using algorithms to manipulate symbols.

Researchers are working to develop hybrid AI systems that can figure out simple abstract relations between objects and reason about them as effortlessly as a human brain does.
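To see what that could look like, here is a toy illustration of the neuro-symbolic split; it is only a sketch (the hard-coded detections stand in for what a real neural network would produce), not any lab’s actual system:

```python
# Toy neuro-symbolic sketch: a "perception" stage (standing in for a
# neural network) turns a scene into symbols; a symbolic stage then
# reasons over those symbols. Detections are hard-coded stand-ins.

def perceive(scene_image):
    # A real system would run an object-detection network here.
    return [
        {"shape": "cube",   "color": "red",  "size": "large"},
        {"shape": "sphere", "color": "blue", "size": "small"},
        {"shape": "cube",   "color": "red",  "size": "small"},
    ]

def count(objects, **attrs):
    """Symbolic reasoning: count objects matching all given attributes."""
    return sum(all(obj[k] == v for k, v in attrs.items()) for obj in objects)

objects = perceive(scene_image=None)
print(count(objects, shape="cube", color="red"))  # -> 2
print(count(objects, size="small"))               # -> 2
```

The symbolic list of objects is the interface between the two halves: swap the hard-coded detections for a real object detector, and the reasoning code keeps working unchanged.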

What is symbolic AI?

During the 1960s and 1970s, new technological advances were met with researchers’ increasing desire to understand how machines and nature interact. Researchers believed that using symbolic approaches would inevitably produce an artificially intelligent machine, which was seen as their discipline’s long-term goal.

The term “good old-fashioned artificial intelligence”, or “GOFAI”, was coined by John Haugeland in his 1985 book ‘Artificial Intelligence: The Very Idea’, which explored artificial intelligence’s ethical and philosophical implications. Since the initial efforts to build thinking computers in the 1950s, research and development in the AI field have followed two parallel approaches: symbolic AI and connectionist AI.

Symbolic AI (also known as Classical AI) is an area of artificial intelligence research that focuses on attempting to express human knowledge clearly in a declarative form, that is, facts and rules. From the mid-1950s until the late 1980s, there was significant use of symbolic artificial intelligence. On the other hand, in recent years, a connectionist approach such as machine learning with deep neural networks has come to the forefront.

Combining symbolic AI and neural networks 

 

In the past few years, there has been a shift away from the symbolic approach due to its technical limits.

According to David Cox, IBM Director of the MIT-IBM Watson AI Lab, deep learning and neural networks excel at the “messiness of the world,” but symbolic AI does not. Neural networks meticulously study and compare a large number of annotated instances to discover significant relationships and create corresponding mathematical models.

Several prominent IT businesses and academic labs have put significant effort into the use of deep learning. Neural networks and deep learning excel at tasks where symbolic AI fails. As a result, it’s being used to tackle complex challenges today. For example, deep learning has made significant contributions to the computer vision revolution with use cases in facial recognition and tuberculosis detection. Language-related activities have also benefited from deep learning breakthroughs.

There are, however, certain limits to deep learning and neural networks. One is that deep learning depends on the availability of large volumes of data. In addition, neural networks are vulnerable to adversarial examples, also known as adversarial data, which can manipulate an AI model’s behaviour in unpredictable and harmful ways.

When combined, however, symbolic AI and neural networks can form a good base for developing hybrid AI systems.

Future of hybrid AI 

The hybrid AI model utilises the neural network’s ability to process and evaluate unstructured data while also using symbolic AI techniques. Connectionist viewpoints argue that techniques based on neural networks will eventually provide sophisticated and broadly applicable AI. In 2019, the International Conference on Learning Representations (ICLR) featured a paper in which researchers from MIT and IBM combined neural networks with rule-based artificial intelligence to create an AI model. Their approach, called the “Neuro-Symbolic Concept Learner” (NSCL), tackles visual question answering (VQA) problems by using neural networks in conjunction with symbolic reasoning, and it is claimed to overcome the difficulties pure deep learning faces and to be superior to the sum of its parts. The researchers demonstrated that NSCL handled the VQA dataset CLEVR with remarkable accuracy. Even more important, the hybrid AI model achieved this with less training data, overcoming two long-standing deep learning challenges.

Even Google’s search engine is a complex, all-in-one AI system made up of cutting-edge deep learning tools such as Transformers and advanced symbol manipulation tools like the Knowledge Graph.




Source


Let’s discuss Functional NFTs

Functional NFTs are changing the way we interact with each other and with the gaming experience. Earlier, NFTs were limited to products, but now they put a value on services too. With functional NFTs, you can choose to buy an experience rather than a piece of art.

Non-Fungible Tokens (NFTs) have stirred things up in the world of art. While the underlying technology behind NFTs remains simple, they have morphed into multiple applications, some of which we shall discuss soon. Traditionally, there have been five categories of NFTs: Collectibles, Game Assets, Virtual Land, Crypto Art and Others (including domain names, property titles, etc.). Currently, another category seems to be getting some buzz in the industry. This new player is called “Functional NFTs”.

What are Functional NFTs?

Let’s discuss what Functional NFTs are first. The meaning should be clear from the name itself: NFTs that provide some sort of functionality. It could be a game asset that performs some function. For example, if a game has an avatar as an NFT and it provides certain functionality, then it can be called a Functional NFT. This functionality could take the form of accruing points in a game or giving the player some special power.

Another example could be an NFT created by a restaurant owner. The NFT works as a pass for one person to have dinner on Sunday at the restaurant; it has some functionality and serves a given purpose. In a similar fashion, imagine walking into a club and not having to stand in line. There can be an NFT for that too: owning it gives you free access to the club, and since you own the NFT, no one needs to check your ID.
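As a rough sketch of the concept, here is a deliberately simplified, off-chain toy model in Python. Real functional NFTs live on a blockchain; this only captures the ownership-plus-utility idea, and every name in it is made up:

```python
# Off-chain toy model of a "functional" NFT: a unique token that is also
# redeemable for a service (here, a dinner pass). Purely illustrative.

class FunctionalNFT:
    def __init__(self, token_id: int, utility: str, owner: str):
        self.token_id = token_id
        self.utility = utility    # the service this token grants
        self.owner = owner
        self.redeemed = False

    def transfer(self, new_owner: str) -> None:
        # Ownership travels with the token, e.g. after a marketplace sale.
        self.owner = new_owner

    def redeem(self, caller: str) -> str:
        # Only the current owner can use the utility, and only once.
        if caller != self.owner:
            raise PermissionError("only the current owner can redeem")
        if self.redeemed:
            raise ValueError("this pass has already been used")
        self.redeemed = True
        return f"token #{self.token_id}: {self.utility} granted to {caller}"

dinner_pass = FunctionalNFT(1, "Sunday dinner for one", owner="alice")
dinner_pass.transfer("bob")          # resold to bob on a marketplace
print(dinner_pass.redeem("bob"))     # bob shows the token at the restaurant
```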

Normal vs Functional NFTs

Moreover, there has been a heated debate about value accrual in normal versus Functional NFTs. The argument is that non-functional NFTs are easier to make and are sold quickly on the market, thus acquiring value quickly. In comparison, Functional NFTs, such as those in games, require more thought: it takes time to build a great experience around the basic utility of the functional NFT.

Consequently, they take more time to build value. For example, Axie Infinity, a Pokemon-like game that allows players to collect, breed and battle creatures, was launched in 2018, but it was quite different then from what it is right now. The developer team went through multiple iterations to finesse the game experience, and once it was finessed, the NFT assets within the game accrued value. The phenomenon is termed the “Promise Effect”, which says that an NFT that promises some experience will accrue value more slowly than a non-functional NFT.

A new type of Functional NFTs

HODL Valley, a new metaverse gaming project, is trying to create a tokenized city. One among many of its features is Functional NFTs, and these NFTs take the idea a step further. HODL Valley contains around 24 different locations, each with a specific function and utility. These locations are connected to DApps which carry out the functionality for users. The locations can be purchased in-app, and the revenues they generate can be taken home by the NFT owner. For example, let’s say a bank is represented by an NFT. Since it’s connected to a DApp, it can provide lending and borrowing services. As other users in the game play and use the bank, the NFT owner, who is in turn the owner of the bank, will be able to generate an income stream from it. That is how functional NFTs have been pitched recently.

These functional NFTs are bound to change the way we interact with games and real life. With added functionality, individuals can get a unique experience. An NFT is no longer just a token that represents value; it is a function in itself. If NFTs were a kind of money, until now that money only bought products. Now, it has started moving into services too.

Source


What is Facial Recognition?


Facial recognition is a way of identifying or confirming an individual’s identity using their face. Facial recognition systems can be used to identify people in photos, videos, or in real-time.

Facial recognition is a category of biometric security. Other forms of biometric software include voice recognition, fingerprint recognition, and eye retina or iris recognition. The technology is mostly used for security and law enforcement, though there is increasing interest in other areas of use.

How does facial recognition work?

Many people are familiar with face recognition technology through the Face ID used to unlock iPhones (however, this is only one application of face recognition). Face ID does not rely on a massive database of photos to determine an individual’s identity — it simply identifies and recognizes one person as the sole owner of the device, while limiting access for others.

Beyond unlocking phones, facial recognition works by matching the faces of people walking past special cameras to images of people on a watch list. The watch lists can contain pictures of anyone, including people who are not suspected of any wrongdoing, and the images can come from anywhere — even from our social media accounts. Facial technology systems can vary, but in general, they tend to operate as follows:

Step 1: Face detection

The camera detects and locates the image of a face, either alone or in a crowd. The image may show the person looking straight ahead or in profile.

Step 2: Face analysis

Next, an image of the face is captured and analyzed. Most facial recognition technology relies on 2D rather than 3D images because it can more conveniently match a 2D image with public photos or those in a database. The software reads the geometry of your face. Key factors include the distance between your eyes, the depth of your eye sockets, the distance from forehead to chin, the shape of your cheekbones, and the contour of the lips, ears, and chin. The aim is to identify the facial landmarks that are key to distinguishing your face.

Step 3: Converting the image to data

The face capture process transforms analog information (a face) into a set of digital information (data) based on the person's facial features. The analysis of your face is essentially turned into a mathematical formula, and the resulting numerical code is called a faceprint. In the same way that thumbprints are unique, each person has their own faceprint.

Step 4: Finding a match

Your faceprint is then compared against a database of other known faces. For example, the FBI has access to up to 650 million photos, drawn from various state databases. On Facebook, any photo tagged with a person’s name becomes a part of Facebook's database, which may also be used for facial recognition. If your faceprint matches an image in a facial recognition database, then a determination is made.
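For a hands-on flavour of steps 1 to 4, here is a minimal sketch using the open-source face_recognition Python library (the image file names are placeholders, and production systems are of course far more elaborate):

```python
# Minimal face-matching sketch with the open-source face_recognition
# library (pip install face_recognition). File names are placeholders.
import face_recognition

# Steps 1-3: detect the face and convert it into a numerical faceprint
# (a 128-dimensional encoding of the facial geometry).
known_image = face_recognition.load_image_file("known_person.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

candidate_image = face_recognition.load_image_file("unknown_person.jpg")
candidate_encodings = face_recognition.face_encodings(candidate_image)

# Step 4: compare each candidate faceprint against the known one.
# Tolerance controls strictness (lower = stricter matching).
for encoding in candidate_encodings:
    match = face_recognition.compare_faces([known_encoding], encoding,
                                           tolerance=0.6)[0]
    distance = face_recognition.face_distance([known_encoding], encoding)[0]
    print(f"match: {match}, distance: {distance:.3f}")
```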

Of all the biometric measurements, facial recognition is considered the most natural. Intuitively, this makes sense, since we typically recognize ourselves and others by looking at faces, rather than thumbprints and irises. It is estimated that over half of the world's population is touched by facial recognition technology regularly.

How facial recognition is used

The technology is used for a variety of purposes. These include:

Unlocking phones

Various phones, including the most recent iPhones, use face recognition to unlock the device. The technology offers a powerful way to protect personal data and ensures that sensitive data remains inaccessible if the phone is stolen. Apple claims that the chance of a random face unlocking your phone is about one in 1 million.

Law enforcement

Facial recognition is regularly used by law enforcement. According to this NBC report, use of the technology is increasing amongst law enforcement agencies within the US, and the same is true in other countries. Police collect mugshots from arrestees and compare them against local, state, and federal face recognition databases. Once an arrestee’s photo has been taken, it is added to those databases, to be scanned whenever police carry out another criminal search.

Also, mobile face recognition allows officers to use smartphones, tablets, or other portable devices to take a photo of a driver or a pedestrian in the field and immediately compare that photo against one or more face recognition databases to attempt an identification.

Airports and border control

Facial recognition has become a familiar sight at many airports around the world. Increasing numbers of travellers hold biometric passports, which allow them to skip the ordinarily long lines and instead walk through an automated ePassport control to reach the gate faster. Facial recognition not only reduces waiting times but also allows airports to improve security. The US Department of Homeland Security predicts that facial recognition will be used on 97% of travellers by 2023. As well as at airports and border crossings, the technology is used to enhance security at large-scale events such as the Olympics.


Finding missing persons

Facial recognition can be used to find missing persons and victims of human trafficking. If missing individuals are added to a database, law enforcement can be alerted as soon as they are recognized by face recognition — whether it is in an airport, retail store, or other public space.

Reducing retail crime

Facial recognition is used to identify when known shoplifters, organized retail criminals, or people with a history of fraud enter stores. Photographs of individuals can be matched against large databases of criminals so that loss prevention and retail security professionals can be notified when shoppers who potentially represent a threat enter the store.

Improving retail experiences

The technology offers the potential to improve retail experiences for customers. For example, kiosks in stores could recognize customers, make product suggestions based on their purchase history, and point them in the right direction. “Face pay” technology could allow shoppers to skip long checkout lines with slower payment methods.

Banking

Biometric online banking is another benefit of face recognition. Instead of using one-time passwords, customers can authorize transactions by looking at their smartphone or computer. With facial recognition, there are no passwords for hackers to compromise. And if hackers steal your photo database, ‘liveness’ detection – a technique used to determine whether the source of a biometric sample is a live human being or a fake representation – should (in theory) prevent them from using it for impersonation purposes. Face recognition could make debit cards and signatures a thing of the past.

Marketing and advertising

Marketers have used facial recognition to enhance consumer experiences. For example, frozen pizza brand DiGiorno used facial recognition for a 2017 marketing campaign where it analyzed the expressions of people at DiGiorno-themed parties to gauge people’s emotional reactions to pizza. Media companies also use facial recognition to test audience reaction to movie trailers, characters in TV pilots, and optimal placement of TV promotions. Billboards that incorporate face recognition technology – such as those at London’s Piccadilly Circus – mean brands can trigger tailored advertisements.

Healthcare

Hospitals use facial recognition to help with patient care. Healthcare providers are testing the use of facial recognition to access patient records, streamline patient registration, detect emotion and pain in patients, and even help to identify specific genetic diseases. AiCure has developed an app that uses facial recognition to ensure that people take their medication as prescribed. As biometric technology becomes less expensive, adoption within the healthcare sector is expected to increase.

Tracking student or worker attendance

Some educational institutions in China use face recognition to ensure students are not skipping class. Tablets are used to scan students' faces and match them to photos in a database to validate their identities. More broadly, the technology can be used for workers to sign in and out of their workplaces, so that employers can track attendance.

Recognizing drivers

According to this consumer report, car companies are experimenting with facial recognition to replace car keys. The technology would replace the key for accessing and starting the car and would remember drivers’ preferences for seat and mirror positions and radio station presets.

Monitoring gambling addictions

Facial recognition can help gambling companies protect their customers to a higher degree. Monitoring those entering and moving around gambling areas is difficult for human staff, especially in large crowded spaces such as casinos. Facial recognition technology enables companies to identify those who are registered as gambling addicts and keep a record of their play, so staff can advise them when it is time to stop. Casinos can face hefty fines if gamblers on voluntary exclusion lists are caught gambling.

Examples of facial recognition technology

  1. Amazon previously promoted its cloud-based face recognition service named Rekognition to law enforcement agencies. However, in a June 2020 blog post, the company announced it was planning a one-year moratorium on the use of its technology by police. The rationale for this was to allow time for US federal laws to be initiated, to protect human rights and civil liberties.
  2. Apple uses facial recognition to help users quickly unlock their phones, log in to apps, and make purchases.
  3. British Airways enables facial recognition for passengers boarding flights from the US. Travellers' faces can be scanned by a camera to have their identity verified to board their plane without showing their passport or boarding pass. The airline has been using the technology on UK domestic flights from Heathrow and is working towards biometric boarding on international flights from the airport.
  4. Cigna, a US-based healthcare insurer, allows customers in China to file health insurance claims which are signed using a photo, rather than a written signature, in a bid to cut down on instances of fraud.
  5. Coca-Cola has used facial recognition in several ways across the world. Examples include rewarding customers for recycling at some of its vending machines in China, delivering personalized ads on its vending machines in Australia, and for event marketing in Israel.
  6. Facebook began using facial recognition in the US in 2010, when it automatically tagged people in photos using its tag suggestions tool. The tool scans a user's face and offers suggestions about who that person is. Since 2019, Facebook has made the feature opt-in as part of a drive to become more privacy-focused. Facebook provides information on how you can opt in or out of face recognition here.
  7. Google incorporates the technology into Google Photos and uses it to sort pictures and automatically tag them based on the people recognized.
  8. MAC make-up uses facial recognition technology in some of its brick-and-mortar stores, allowing customers to virtually "try on" make-up using in-store augmented reality mirrors.
  9. McDonald’s has used facial recognition in its Japanese restaurants to assess the quality of customer service provided there, including analyzing whether its employees are smiling while assisting customers.
  10. Snapchat is one of the pioneers of facial recognition software: it allows brands and organizations to create filters which mould to the user’s face — hence the ubiquitous puppy dog faces and flower crown filters seen on social media.

Technology companies that provide facial recognition technology include:

  • Kairos
  • Noldus
  • Affectiva
  • Sightcorp
  • Nviso

Advantages of face recognition

Aside from unlocking your smartphone, facial recognition brings other benefits:

Increased security

On a governmental level, facial recognition can help to identify terrorists or other criminals. On a personal level, facial recognition can be used as a security tool for locking personal devices and for personal surveillance cameras.

Reduced crime

Face recognition makes it easier to track down burglars, thieves, and trespassers. The mere knowledge that a face recognition system is present can serve as a deterrent, especially to petty crime. Aside from physical security, there are benefits to cybersecurity as well. Companies can use face recognition technology as a substitute for passwords to access computers. In theory, the technology cannot be hacked, as there is nothing to steal or change, as is the case with a password.

Removing bias from stop and search

Public concern over unjustified stops and searches is a source of controversy for the police — facial recognition technology could improve the process. By singling out suspects among crowds through an automated rather than human process, face recognition technology could help reduce potential bias and decrease stops and searches on law-abiding citizens.

Greater convenience

As the technology becomes more widespread, customers will be able to pay in stores using their face, rather than pulling out their credit cards or cash. This could save time in checkout lines. Since there is no contact required for facial recognition as there is with fingerprinting or other security measures – useful in the post-COVID world – facial recognition offers a quick, automatic, and seamless verification experience.

Faster processing

The process of recognizing a face takes only a second, which has benefits for the companies that use facial recognition. In an era of cyber-attacks and advanced hacking tools, companies need both secure and fast technologies. Facial recognition enables quick and efficient verification of a person’s identity.

Integration with other technologies

Most facial recognition solutions are compatible with most security software. In fact, they are easily integrated, which limits the amount of additional investment required to implement them.

Disadvantages of face recognition

While some people do not mind being filmed in public and do not object to the use of facial recognition where there is a clear benefit or rationale, the technology can inspire intense reactions from others. Some of the disadvantages or concerns include:

Surveillance

Some worry that the use of facial recognition along with ubiquitous video cameras, artificial intelligence, and data analytics creates the potential for mass surveillance, which could restrict individual freedom. While facial recognition technology allows governments to track down criminals, it could also allow them to track down ordinary and innocent people at any time.

Scope for error

Facial recognition data is not free from error, which could lead to people being implicated for crimes they have not committed. For example, a slight change in camera angle or a change in appearance, such as a new hairstyle, could lead to error. In 2018, Newsweek reported that Amazon’s facial recognition technology had falsely identified 28 members of the US Congress as people arrested for crimes.

Breach of privacy

The question of ethics and privacy is the most contentious one. Governments have been known to store several citizens' pictures without their consent. In 2020, the European Commission said it was considering a ban on facial recognition technology in public spaces for up to five years, to allow time to work out a regulatory framework to prevent privacy and ethical abuses.

Massive data storage

Facial recognition software relies on machine learning technology, which requires massive data sets to “learn” to deliver accurate results. Such large data sets require robust data storage. Small and medium-sized companies may not have sufficient resources to store the required data.

Facial recognition security - how to protect yourself

While biometric data is generally considered one of the most reliable authentication methods, it also carries significant risk. That’s because if someone’s credit card details are hacked, that person has the option to freeze their credit and take steps to change the personal information that was breached. What do you do if you lose your digital ‘face’?

Around the world, biometric information is being captured, stored, and analyzed in increasing quantities, often by organizations and governments, with a mixed record on cybersecurity. A question increasingly being asked is, how safe is the infrastructure that holds and processes all this data?

As facial recognition software is still in its relative infancy, the laws governing this area are evolving (and sometimes non-existent). Regular citizens whose information is compromised have relatively few legal avenues to pursue. Cybercriminals often elude the authorities or are sentenced years after the fact, while their victims receive no compensation and are left to fend for themselves.

As the use of facial recognition becomes more widespread, the scope for hackers to steal your facial data and use it to commit fraud increases.

Biometric technology offers very compelling security solutions. Despite the risks, the systems are convenient and hard to duplicate. These systems will continue to develop in the future — the challenge will be to maximize their benefits while minimizing their risks.

Source


What is the IoT?

The Internet of Things (IoT) describes the network of physical objects—“things”—that are embedded with sensors, software, and other technologies for the purpose of connecting and exchanging data with other devices and systems over the internet. These devices range from ordinary household objects to sophisticated industrial tools. With more than 7 billion connected IoT devices today, experts are expecting this number to grow to 10 billion by 2020 and 22 billion by 2025. 

Why is Internet of Things (IoT) so important?

Over the past few years, IoT has become one of the most important technologies of the 21st century. Now that we can connect everyday objects—kitchen appliances, cars, thermostats, baby monitors—to the internet via embedded devices, seamless communication is possible between people, processes, and things.

By means of low-cost computing, the cloud, big data, analytics, and mobile technologies, physical things can share and collect data with minimal human intervention. In this hyperconnected world, digital systems can record, monitor, and adjust each interaction between connected things. The physical world meets the digital world—and they cooperate.

What technologies have made IoT possible?

While the idea of IoT has been in existence for a long time, a collection of recent advances in a number of different technologies has made it practical.

  • Access to low-cost, low-power sensor technology. Affordable and reliable sensors are making IoT technology possible for more manufacturers.
  • Connectivity. A host of network protocols for the internet has made it easy to connect sensors to the cloud and to other “things” for efficient data transfer.
  • Cloud computing platforms. The increase in the availability of cloud platforms enables both businesses and consumers to access the infrastructure they need to scale up without actually having to manage it all.
  • Machine learning and analytics. With advances in machine learning and analytics, along with access to varied and vast amounts of data stored in the cloud, businesses can gather insights faster and more easily. The emergence of these allied technologies continues to push the boundaries of IoT and the data produced by IoT also feeds these technologies.
  • Conversational artificial intelligence (AI). Advances in neural networks have brought natural-language processing (NLP) to IoT devices (such as digital personal assistants Alexa, Cortana, and Siri) and made them appealing, affordable, and viable for home use.

What is industrial IoT?

Industrial IoT (IIoT) refers to the application of IoT technology in industrial settings, especially with respect to instrumentation and control of sensors and devices that engage cloud technologies. Refer to this Titan use case PDF for a good example of IIoT. Recently, industries have used machine-to-machine communication (M2M) to achieve wireless automation and control. But with the emergence of cloud and allied technologies (such as analytics and machine learning), industries can achieve a new automation layer and with it create new revenue and business models. IIoT is sometimes called the fourth wave of the industrial revolution, or Industry 4.0. The following are some common uses for IIoT:

  • Smart manufacturing
  • Connected assets and preventive and predictive maintenance
  • Smart power grids
  • Smart cities
  • Connected logistics
  • Smart digital supply chains

What are IoT applications?

Business-ready, SaaS IoT Applications

IoT Intelligent Applications are prebuilt software-as-a-service (SaaS) applications that can analyze and present captured IoT sensor data to business users via dashboards. 

IoT applications use machine learning algorithms to analyze massive amounts of connected sensor data in the cloud. Using real-time IoT dashboards and alerts, you gain visibility into key performance indicators, statistics for mean time between failures, and other information. Machine learning–based algorithms can identify equipment anomalies, send alerts to users, and even trigger automated fixes or proactive countermeasures.
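As a minimal sketch of the kind of anomaly check such an application might run on a stream of sensor readings (the window size, threshold, and readings below are all illustrative):

```python
# Toy anomaly detector for streaming sensor data: flag readings that
# deviate sharply from a rolling baseline. All values are illustrative.
from collections import deque
from statistics import mean, stdev

WINDOW = 10        # number of readings in the rolling baseline
Z_THRESHOLD = 3.0  # deviation (in standard units) that counts as anomalous

def monitor(readings):
    window = deque(maxlen=WINDOW)
    for t, value in enumerate(readings):
        if len(window) == WINDOW:
            mu, sigma = mean(window), stdev(window)
            if sigma > 0 and abs(value - mu) / sigma > Z_THRESHOLD:
                print(f"t={t}: ALERT - reading {value} deviates from baseline {mu:.1f}")
        window.append(value)

# Temperature readings from a machine sensor; the spike triggers an alert.
monitor([70.1, 70.3, 69.8, 70.0, 70.2, 69.9, 70.1,
         70.0, 70.2, 69.9, 70.1, 95.4, 70.0])
```

A real deployment would replace the print statement with an alert to a dashboard or an automated maintenance workflow.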

With cloud-based IoT applications, business users can quickly enhance existing processes for supply chains, customer service, human resources, and financial services. There’s no need to recreate entire business processes.

What are some ways IoT applications are deployed?

The ability of IoT to provide sensor information as well as enable device-to-device communication is driving a broad set of applications. The following are some of the most popular applications and what they do.

Create new efficiencies in manufacturing through machine monitoring and product-quality monitoring.

Machines can be continuously monitored and analyzed to make sure they are performing within required tolerances. Products can also be monitored in real time to identify and address quality defects.

Improve the tracking and “ring-fencing” of physical assets.

Tracking enables businesses to quickly determine asset location. Ring-fencing allows them to make sure that high-value assets are protected from theft and removal.

Use wearables to monitor human health analytics and environmental conditions.

IoT wearables enable people to better understand their own health and allow physicians to remotely monitor patients. This technology also enables companies to track the health and safety of their employees, which is especially useful for workers employed in hazardous conditions.

Drive efficiencies and new possibilities in existing processes.

One example of this is the use of IoT to increase efficiency and safety in connected logistics for fleet management. Companies can use IoT fleet monitoring to direct trucks, in real time, to improve efficiency.

Enable business process changes.

An example of this is the use of IoT devices for connected assets to monitor the health of remote machines and trigger service calls for preventive maintenance. The ability to remotely monitor machines is also enabling new product-as-a-service business models, where customers no longer need to buy a product but instead pay for its usage.


What industries can benefit from IoT?

Organizations best suited for IoT are those that would benefit from using sensor devices in their business processes.

Manufacturing

Manufacturers can gain a competitive advantage by using production-line monitoring to enable proactive maintenance on equipment when sensors detect an impending failure. Sensors can actually measure when production output is compromised. With the help of sensor alerts, manufacturers can quickly check equipment for accuracy or remove it from production until it is repaired. This allows companies to reduce operating costs, get better uptime, and improve asset performance management.

Automotive

The automotive industry stands to realize significant advantages from the use of IoT applications. In addition to the benefits of applying IoT to production lines, sensors can detect impending equipment failure in vehicles already on the road and can alert the driver with details and recommendations. Thanks to aggregated information gathered by IoT-based applications, automotive manufacturers and suppliers can learn more about how to keep cars running and car owners informed.

Transportation and Logistics

Transportation and logistical systems benefit from a variety of IoT applications. Fleets of cars, trucks, ships, and trains that carry inventory can be rerouted based on weather conditions, vehicle availability, or driver availability, thanks to IoT sensor data. The inventory itself could also be equipped with sensors for track-and-trace and temperature-control monitoring. The food and beverage, flower, and pharmaceutical industries often carry temperature-sensitive inventory that would benefit greatly from IoT monitoring applications that send alerts when temperatures rise or fall to a level that threatens the product.

Retail

IoT applications allow retail companies to manage inventory, improve customer experience, optimize supply chain, and reduce operational costs. For example, smart shelves fitted with weight sensors can collect RFID-based information and send the data to the IoT platform to automatically monitor inventory and trigger alerts if items are running low. Beacons can push targeted offers and promotions to customers to provide an engaging experience.

Public Sector

The benefits of IoT in the public sector and other service-related environments are similarly wide-ranging. For example, government-owned utilities can use IoT-based applications to notify their users of mass outages and even of smaller interruptions of water, power, or sewer services. IoT applications can collect data concerning the scope of an outage and deploy resources to help utilities recover from outages with greater speed.

Healthcare

IoT asset monitoring provides multiple benefits to the healthcare industry. Doctors, nurses, and orderlies often need to know the exact location of patient-assistance assets such as wheelchairs. When a hospital’s wheelchairs are equipped with IoT sensors, they can be tracked from the IoT asset-monitoring application so that anyone looking for one can quickly find the nearest available wheelchair. Many hospital assets can be tracked this way to ensure proper usage as well as financial accounting for the physical assets in each department.

General Safety Across All Industries

In addition to tracking physical assets, IoT can be used to improve worker safety. Employees in hazardous environments such as mines, oil and gas fields, and chemical and power plants, for example, need to know about the occurrence of a hazardous event that might affect them. When they are connected to IoT sensor–based applications, they can be notified of accidents or rescued from them as swiftly as possible. IoT applications are also used for wearables that can monitor human health and environmental conditions. Not only do these types of applications help people better understand their own health, they also permit physicians to monitor patients remotely.



How is IoT changing the world? Take a look at connected cars.

IoT is reinventing the automobile by enabling connected cars. With IoT, car owners can operate their cars remotely—by, for example, preheating the car before the driver gets in it or by remotely summoning a car by phone. Given IoT’s ability to enable device-to-device communication, cars will even be able to book their own service appointments when warranted.

The connected car allows car manufacturers or dealers to turn the car ownership model on its head. Previously, manufacturers had an arm's-length relationship with individual buyers (or none at all). Essentially, the manufacturer's relationship with the car ended once it was sent to the dealer. With connected cars, automobile makers or dealers can have a continuous relationship with their customers. Instead of selling cars, they can charge drivers usage fees, offering “transportation-as-a-service” using autonomous cars. IoT allows manufacturers to upgrade their cars continuously with new software, a sea-change difference from the traditional model of car ownership in which vehicles immediately depreciate in performance and value.

Source