Author: VIE

  • How Blockchain Technology Can Drive Innovation-Based Sustainability

    04 Feb’23 | By Amit Ghosh

    As the country pushes its sustainability agenda, the use of new technology deserves a closer look as a way to make a difference in this cause.

    When we examine blockchain’s role in environmental, social, and governance (ESG) policies and markets around the world, we can see how technology is already changing ESG markets.

    If more Indian companies adopt blockchain as part of their sustainability practices and policies, we will be one step closer to realizing the ambitious goals that the country and the world have set for themselves.

    As the world moves towards a greener future, it is imperative for businesses to build and lead with sustainable practices. India, one of the most populous countries in the world, has a tremendous stake in the global responsibility towards building a more sustainable world. The responsibility is especially magnified given the country’s reputation as a major economic powerhouse that ranks among the world’s largest energy-consuming countries. 

    Link

  • Will crypto make us live longer?


    Imagine a world where patients and their families can directly fund scientists developing the next breakthrough drug or treatment that they need. A world in which drug development is a collaborative, open, and decentralized process. Such a future is not only possible, but the decentralized science movement is making it a reality.

    Through blockchain, crypto, and NFTs, of course. And that’s exactly what we are going to uncover in today’s CoinMarketCap episode:


     🔵 CoinMarketCap is the world’s most-referenced price-tracking website for crypto assets in the rapidly growing cryptocurrency space. Its mission is to make crypto accessible all around the world through data and content.

    DeSci Foundation
    “Open science,
    fair peer-review,
    efficient funding.

    We support the development of a more verifiable, more open, and fairer ecosystem for science and scientists.”

  • Ground-Breaking Research Finds 11 Multidimensional Universe Inside the Human Brain

    The human brain is capable of creating structures in up to 11 dimensions, according to a study published in Frontiers in Computational Neuroscience.

    According to the Blue Brain Project, the dimensions are not interpreted in the traditional sense of a dimension, which most of us understand. Scientists found exciting new facts about the intricacy of the human brain as part of the Blue Brain Project.

    Neuroscientist Henry Markram, director of Blue Brain Project and professor at the EPFL in Lausanne, Switzerland, said: “We found a world that we had never imagined. There are tens of millions of these objects, even in a speck of the brain, up through seven dimensions. In some networks, we even found structures with up to eleven dimensions.”

    Traditional mathematical viewpoints were found to be inapplicable and unproductive once researchers studied the human brain.

    The graphic tries to depict something that can’t be seen – a multi-dimensional universe of structures and spaces. On the left is a computerised replica of a section of the neocortex, the brain’s most evolved portion. On the right, forms of various sizes and geometries illustrate structures with dimensions ranging from one to seven and beyond. The central “black hole” represents a collection of multi-dimensional voids, or cavities. In a paper published in Frontiers in Computational Neuroscience, researchers from the Blue Brain Project claim that groups of neurons coupled into such cavities provide the missing link between brain structure and function. Image source: Blue Brain Project.

    “The mathematics usually applied to study networks cannot detect the high-dimensional structures and spaces that we now see clearly,” Markram revealed.

    Instead, scientists opted to investigate algebraic topology. Algebraic topology is a branch of mathematics that studies topological spaces using techniques from abstract algebra. In applying this approach in their latest work, scientists from the Blue Brain Project were joined by mathematicians Kathryn Hess from EPFL and Ran Levi from Aberdeen University.

    Professor Hess explained: “Algebraic topology is like a telescope and microscope at the same time. It can zoom into networks to find hidden structures – the trees in the forest – and see the empty spaces – the clearings – all at the same time.”

    The researchers observed that brain structures are formed when a collection of neurons – cells in the brain that carry impulses – form a clique. Each neuron in the group is connected to every other neuron in the group in a unique way, resulting in the formation of a new entity. The ‘dimension’ of an item increases as the number of neurons in a clique increases.
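    As a toy illustration of this clique idea, the sketch below enumerates cliques in a small, entirely hypothetical "connection graph" of neurons, treating a clique of k neurons as a (k−1)-dimensional simplex, the usual convention in algebraic topology; the neuron names and edges are made up for illustration.

```python
from itertools import combinations

# Hypothetical undirected graph of neuron-to-neuron connections.
edges = {("a", "b"), ("a", "c"), ("a", "d"), ("b", "c"), ("b", "d"), ("c", "d"), ("d", "e")}

def connected(u, v):
    return (u, v) in edges or (v, u) in edges

def is_clique(nodes):
    # Every neuron in the group must connect to every other neuron in the group.
    return all(connected(u, v) for u, v in combinations(nodes, 2))

def cliques_of_size(neurons, k):
    return [set(c) for c in combinations(sorted(neurons), k) if is_clique(c)]

neurons = {"a", "b", "c", "d", "e"}
# A clique of k neurons corresponds to a (k-1)-dimensional structure.
for k in range(2, len(neurons) + 1):
    for clique in cliques_of_size(neurons, k):
        print(f"{sorted(clique)} -> dimension {k - 1}")
```

    Here {a, b, c, d} is fully interconnected, so it forms a 3-dimensional structure, while adding "e" breaks the clique because "e" connects only to "d".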

    The scientists used algebraic topology to model the architecture within a virtual brain they developed with the help of computers. They subsequently confirmed their findings through experiments on real brain tissue. The researchers discovered that by adding inputs to the virtual brain, cliques of progressively higher dimensions formed. In addition, the investigators detected voids between the cliques.

    Ran Levi from Aberdeen University said: “The appearance of high-dimensional cavities when the brain is processing information means that the neurons in the network react to stimuli in an extremely organized manner. It is as if the brain reacts to a stimulus by building then razing a tower of multi-dimensional blocks, starting with rods (1D), then planks (2D), then cubes (3D), and then more complex geometries with 4D, 5D, etc. The progression of activity through the brain resembles a multi-dimensional sandcastle that materializes out of the sand and then disintegrates.”

    The new information on the human brain provides previously unseen insights into how the brain processes information. Scientists have said, however, that it is still unclear how the cliques and cavities arise in such a unique way.

    The new research could someday help scientists solve one of neuroscience’s greatest mysteries: where does the brain ‘store’ memories?

    Reference: Peer reviewed research

    Zeeshan Ali

    November 06, 2022

  • Ransomware is already out of control. AI-powered ransomware could be ‘terrifying.’

    Hiring AI experts to automate ransomware could be the next step for well-funded ransomware groups that are seeking to scale up their attacks.

    In the perpetual battle between cybercriminals and defenders, the latter have always had one largely unchallenged advantage: The use of AI and machine learning allows them to automate a lot of what they do, especially around detecting and responding to attacks. This leg-up hasn’t been nearly enough to keep ransomware at bay, but it has still been far more than what cybercriminals have ever been able to muster in terms of AI and automation.

    That’s because deploying AI-powered ransomware would require AI expertise. And the ransomware gangs don’t have it. At least not yet.

    But given the wealth accumulated by a number of ransomware gangs in recent years, it may not be long before attackers do bring aboard AI experts of their own, prominent cybersecurity authority Mikko Hyppönen said.

    Some of these groups have so much cash — or bitcoin, rather — that they could now potentially compete with legit security firms for talent in AI and machine learning, according to Hyppönen, the chief research officer at cybersecurity firm WithSecure.

    Ransomware gang Conti pulled in $182 million in ransom payments during 2021, according to blockchain data platform Chainalysis. Leaks of Conti’s chats suggest that the group may have invested some of its take in pricey “zero day” vulnerabilities and the hiring of penetration testers.

    “We have already seen [ransomware groups] hire pen testers to break into networks to figure out how to deploy ransomware. The next step will be that they will start hiring ML and AI experts to automate their malware campaigns,” Hyppönen told Protocol.

    “It’s not a far reach to see that they will have the capability to offer double or triple salaries to AI/ML experts in exchange for them to go to the dark side,” he said. “I do think it’s going to happen in the near future — if I would have to guess, in the next 12 to 24 months.”

    If this happens, Hyppönen said, “it would be one of the biggest challenges we’re likely to face in the near future.”

    AI for scaling up ransomware

    While doom-and-gloom cybersecurity predictions are abundant, with two decades of experience on matters of cybercrime, Hyppönen is not just any prognosticator. He has been with his current company, which until recently was known as F-Secure, since 1991 and has been researching — and vying with — cybercriminals since the early days of the concept.

    In his view, the introduction of AI and machine learning on the attacker side would be a distinct game-changer. He’s not alone in thinking so.

    When it comes to ransomware, for instance, automating large portions of the process could mean an even greater acceleration in attacks, said Mark Driver, a research vice president at Gartner.

    Currently, ransomware attacks are often very tailored to the individual target, making the attacks more difficult to scale, Driver said. Even still, the number of ransomware attacks doubled year-over-year in 2021, SonicWall has reported — and ransomware has been getting more successful as well. The percentage of affected organizations that agreed to pay a ransom shot up to 58% in 2021, from 34% the year before, Proofpoint has reported.

    However, if attackers were able to automate ransomware using AI and machine learning, that would allow them to go after an even wider range of targets, according to Driver. That could include smaller organizations, or even individuals.

    “It’s not worth their effort if it takes them hours and hours to do it manually. But if they can automate it, absolutely,” Driver said. Ultimately, “it’s terrifying.”

    The prediction that AI is coming to cybercrime in a big way is not brand new, but it still has yet to manifest, Hyppönen said. Most likely, that’s because the ability to compete with deep-pocketed enterprise tech vendors to bring in the necessary talent has always been a constraint in the past.

    The huge success of the ransomware gangs in 2021, predominantly Russia-affiliated groups, would appear to have changed that, according to Hyppönen. Chainalysis reports it tracked ransomware payments totaling $602 million in 2021, led by Conti’s $182 million. The ransomware group that struck the Colonial Pipeline, DarkSide, earned $82 million last year, and three other groups brought in more than $30 million in that single year, according to Chainalysis.

    Hyppönen estimated that less than a dozen ransomware groups might have the capacity to invest in hiring AI talent in the next few years, primarily gangs affiliated with Russia.

    ‘We would definitely not miss it’

    If cybercrime groups hire AI talent with some of their windfall, Hyppönen believes the first thing they’ll do is automate the most manually intensive parts of a ransomware campaign. The actual execution of a ransomware attack remains difficult, he said.

    “How do you get it on 10,000 computers? How do you find a way inside corporate networks? How do you bypass the different safeguards? How do you keep changing the operation, dynamically, to actually make sure you’re successful?” Hyppönen said. “All of that is manual.”

    Monitoring systems, changing the malware code, recompiling it and registering new domain names to avoid defenses — things it takes humans a long time to do — would all be fairly simple to do with automation. “All of this is done in an instant by machines,” Hyppönen said.

    That means it should be very obvious when AI-powered automation comes to ransomware, according to Hyppönen.

    “This would be such a big shift, such a big change,” he said. “We would definitely not miss it.”

    But would the ransomware groups really decide to go to all this trouble? Allie Mellen, an analyst at Forrester, said she’s not as sure. Given how successful ransomware groups are already, Mellen said it’s unclear why they would bother to take this route.

    “They’re having no problem with the approaches that they’re taking right now,” she said. “If it ain’t broke, don’t fix it.”

    Others see a higher likelihood of AI playing a role in attacks such as ransomware. Like defenders, ransomware gangs clearly have a penchant for evolving their techniques to try to stay ahead of the other side, said Ed Bowen, managing director for the AI Center of Excellence at Deloitte.

    “I’m expecting it — I expect them to be using AI to improve their ability to get at this infrastructure,” Bowen said. “I think that’s inevitable.”

    Lower barrier to entry

    While AI talent is in extremely short supply right now, that will start to change in coming years as a wave of people graduate from university and research programs in the field, Bowen noted.

    The barriers to entry in the AI field are also going lower as tools become more accessible to users, Hyppönen said.

    “Today, all security companies rely heavily on machine learning — so we know exactly how hard it is to hire experts in this field. Especially people who have expertise both in cybersecurity and in machine learning. So these are hard people to recruit,” he told Protocol. “However, it’s becoming easier to become an expert, especially if you don’t need to be a world-class expert.”

    That dynamic could increase the pool of candidates for cybercrime organizations who are, simultaneously, richer and “more powerful than ever before,” Hyppönen said.

    Should this future come to pass, it will have massive implications for cyber defenders if it results in a greater volume of attacks against a broader range of targets.

    Among other things, this would likely mean that the security industry would itself be looking to compete harder than ever for AI talent, if only to try to stay ahead of automated ransomware and other AI-powered threats.

    Between attackers and defenders, “you’re always leapfrogging each other” on technical capabilities, Driver said. “It’s a war of trying to get ahead of the other side.”

  • Top 5 Real-World Applications for Natural Language Processing

    Emerging technologies have greatly facilitated our daily lives. For instance, when you are making yourself dinner but want to call your Mom for the secret recipe, you don’t have to stop what you are doing and dial the number to make the phone call. Instead, all you need to do is to simply speak out — “Hey Siri, call Mom.” And your iPhone automatically makes the call for you.

    The application is simple enough, but the technology behind it could be sophisticated. The magic that makes the aforementioned scenario possible is natural language processing (NLP). NLP is far more than a pillar for building Siri. It can also empower many other AI-infused applications in the real world.

    This article first explains what NLP is and later moves on to introduce five real-world applications of NLP.

    What is NLP?

    From chatbots to Siri, from virtual support agents to knowledge graphs, the application and usage of NLP are ubiquitous in our daily life. NLP stands for “Natural Language Processing”. Simply put, NLP is the ability of a machine to understand human language. It is the bridge that enables humans to directly interact and communicate with machines. NLP is a subfield of artificial intelligence (AI) and in Bill Gates’s words, “NLP is the pearl in the crown of AI.”

    With the ever-expanding market size of NLP, countless companies are investing heavily in this industry, and their product lines vary. Many different but specific systems for various tasks and needs can be built by leveraging the power of NLP.

    The Five Real World NLP Applications

    The most popular and flourishing real-world applications of NLP include conversational user interfaces, AI-powered call quality assessment, intelligent outbound calls, AI-powered call operators, and knowledge graphs.

    Chatbots in E-commerce

    Over five years ago, Amazon already realized the potential benefit of applying NLP to its customer service channels. Back then, when customers had issues with their orders, their only recourse was to call a customer service agent. However, most of the time, what they got from the other end of the phone was: “Your call is important to us. Please hold, we’re currently experiencing a high call load.” Amazon quickly realized the damaging effect this could have on its brand image and set out to build chatbots.

    Nowadays, when you want to quickly get, for example, a refund online, there’s a much more convenient way. All you need to do is activate the Amazon customer service chatbot, type in your order information, and make a refund request. The chatbot interacts and replies the same way a real human does. Apart from chatbots that handle the post-sales customer experience, chatbots also offer pre-sales consulting. If you have any questions about a product you are going to buy, you can simply chat with a bot and get answers.

    E-commerce chatbots.

    With the emergence of new concepts like the metaverse, NLP can do more than power AI chatbots. Avatars for customer support in the metaverse rely on NLP technology, giving customers more realistic chatting experiences.

    Customer support avatar in the metaverse.

    Conversational User Interface

    Another trendy and promising application is interactive systems. Many well-recognized companies are betting big on CUIs (conversational user interfaces). A CUI is the general term for a user interface that can simulate a conversation with a real human being.

    The most common CUIs in our everyday life are Apple’s Siri, Microsoft’s Cortana, Google’s Google Assistant, Amazon’s Alexa, etc.

    Apple’s Siri is a common example of a conversational user interface.

    In addition, CUIs can also be embedded into cars, especially EVs (electric vehicles). NIO, an automobile manufacturer dedicated to designing and developing EVs, launched its own CUI, named NOMI, in 2018. Functionally, the CUIs in cars work much the same way as Siri: drivers can focus on steering while asking the CUI to adjust the A/C temperature, play a song, lock the windows or doors, navigate to the nearest gas station, and so on.

    The conversational user interface in cars.

    The Algorithm Behind

    Despite all the fancy algorithms the technical media have boasted about, one of the most fundamental ways to build a chatbot is to construct and organize FAQ pairs (or, more straightforwardly, question-answer pairs) and use NLP algorithms to figure out whether the user query matches any entry in your FAQ knowledge base. A simple FAQ example would look like this:

    Q: Can I have some coffee?

    A: No, I’d rather have some ribs.

    Now that this FAQ pair is stored in your NLP system, a user can simply ask a similar question, for example: “Coffee, please!” If your algorithm is smart enough, it will figure out that “Coffee, please” closely resembles “Can I have some coffee?” and will output the corresponding answer: “No, I’d rather have some ribs.” And that’s how things are done.

    For a very long time, FAQ search algorithms were based solely on inverted indexing. In this approach, you first tokenize the original sentence and put the tokens and documents into a system like ElasticSearch, which uses an inverted index for indexing and algorithms like TF-IDF or BM25 for scoring.
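    As a rough sketch of how such an inverted-index search works, the toy Python below builds an index over a few hypothetical FAQ questions and scores candidates with a simplified TF-IDF; a production system would instead use ElasticSearch with BM25.

```python
import math
from collections import Counter, defaultdict

# Hypothetical FAQ corpus; a real system would hold thousands of entries.
docs = {
    "q1": "can i have some coffee",
    "q2": "how do i reset my password",
    "q3": "where is my order",
}

def tokenize(text):
    return text.lower().split()

# Inverted index: token -> set of documents containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for tok in tokenize(text):
        index[tok].add(doc_id)

def tfidf_score(query, doc_id):
    # TF-IDF: term frequency in the doc, weighted by how rare the term is.
    doc_tokens = tokenize(docs[doc_id])
    tf = Counter(doc_tokens)
    score = 0.0
    for tok in tokenize(query):
        df = len(index.get(tok, ()))
        if df:
            idf = math.log(len(docs) / df)
            score += tf[tok] / len(doc_tokens) * idf
    return score

def search(query):
    # Only score documents that share at least one token with the query.
    candidates = set().union(*(index.get(t, set()) for t in tokenize(query)))
    return max(candidates, key=lambda d: tfidf_score(query, d)) if candidates else None

print(search("coffee please"))  # → q1
```

    Note that this only matches on surface tokens, which is exactly the weakness discussed next.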

    This algorithm worked just fine until the deep learning era arrived. One of the most substantial problems with it is that neither tokenization nor inverted indexing takes the semantics of the sentences into account. For instance, in the example above, a user could instead say “Can I have a cup of cappuccino?” With tokenization and inverted indexing, there is a very good chance that the system won’t recognize “coffee” and “a cup of cappuccino” as the same thing and will thus fail to understand the sentence. AI engineers have had to build a lot of workarounds for these kinds of issues.

    But things got much better with deep learning. With pre-trained models like BERT and pipelines like Towhee, we can easily encode all sentences into vectors, store them in a vector database such as Milvus, and simply calculate vector distances to figure out how semantically similar two sentences are.
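    The deep-learning flow can be sketched in the same toy fashion. The vectors below are hand-made stand-ins for real sentence embeddings (a real system would encode sentences with a model like BERT and search them in a vector database such as Milvus):

```python
import math

# Hand-made "embeddings", purely for illustration. In practice each FAQ
# question would be encoded by a pre-trained model into a high-dimensional vector.
faq_vectors = {
    "Can I have some coffee?": [0.9, 0.1, 0.0],
    "How do I reset my password?": [0.0, 0.2, 0.9],
}

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def nearest_question(query_vector):
    # Brute-force nearest neighbour stands in for an ANN search in a vector DB.
    return max(faq_vectors, key=lambda q: cosine_similarity(query_vector, faq_vectors[q]))

# Pretend this vector came from embedding "a cup of cappuccino, please".
print(nearest_question([0.8, 0.3, 0.1]))  # → Can I have some coffee?
```

    Because the comparison happens in embedding space rather than on surface tokens, “a cup of cappuccino” can land near “coffee” even though they share no words.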

    The algorithm behind conversational user interfaces.

    AI-powered Call Quality Control

    Call centers are indispensable for many large companies that care about customer experience. To better spot issues and improve call quality, assessment is necessary. The problem is that the call centers of large multinational companies receive tremendous numbers of inbound calls per day, so it is impractical to listen to each of the millions of calls and evaluate them. Most of the time, when you hear “to improve our service, this call may be recorded” from the other end of the phone, it doesn’t necessarily mean your call will be checked for quality of service. In fact, even in big organizations, only 2%–3% of calls are replayed and checked manually by quality control staff.

    A call center. Image source: Pexels by Tima Miroshnichenko.

    This is where NLP can help. An AI-powered call quality control engine built on NLP can automatically spot issues in calls and can handle massive volumes of calls in a relatively short period of time. The engine helps detect whether the call operator uses the proper opening and closing sentences and avoids banned slang and taboo words during the call. This can easily increase the check rate from 2%–3% to 100%, with even less manpower and lower costs.

    With a typical AI-powered call quality control service, users first upload the call recordings to the service. Automatic speech recognition (ASR) is then used to transcribe the audio files into text. All the text is subsequently vectorized using deep learning models and stored in a vector database. The service compares the similarity between the text vectors and vectors generated from a certain set of criteria, such as taboo-word vectors and vectors of desired opening and closing sentences. With efficient vector similarity search, handling great volumes of call recordings becomes much more accurate and less time-consuming.
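    One minimal way to picture this pipeline, after ASR has already produced a transcript, is the toy checker below. It uses stdlib fuzzy string matching as a stand-in for vector similarity search; the phrases and threshold are invented for illustration.

```python
from difflib import SequenceMatcher

# Illustrative quality-control criteria; a real service would compare
# transcript embeddings against criterion vectors in a vector database.
banned_phrases = ["shut up", "that is your problem"]
required_opening = "thank you for calling"

def similarity(a, b):
    # Fuzzy string ratio in [0, 1]; stand-in for cosine similarity of embeddings.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def check_call(transcript, threshold=0.8):
    issues = []
    if similarity(transcript[0], required_opening) < threshold:
        issues.append("missing proper opening")
    for sentence in transcript:
        for phrase in banned_phrases:
            if similarity(sentence, phrase) >= threshold:
                issues.append(f"banned phrase detected: {phrase!r}")
    return issues

transcript = ["thank you for caling", "that is your problem", "goodbye"]
print(check_call(transcript))
```

    The fuzzy match tolerates the ASR typo in the opening sentence while still flagging the banned phrase, which is the kind of robustness an embedding-based comparison provides at scale.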

    Intelligent outbound calls

    Believe it or not, some of the phone calls you receive are not from humans! Chances are that it is a robot talking from the other side of the call. To reduce operation costs, some companies might leverage AI phone calls for marketing purposes and much more. Google launched Google Duplex back in 2018, a system that can conduct human-computer conversations and accomplish real-world tasks over the phone. The mechanism behind AI phone calls is pretty much the same as that behind chatbots.

    A user asks the Google Assistant for an appointment, which the Assistant then schedules by having Duplex call the business. Image source: Google AI blog.

    In other cases, you might have also heard something like this on the phone:

    “Thank you for calling. To set up a new account, press 1. To modify the password of an existing account, press 2. To speak to a customer service agent, press 0.”

    or in recent years, something like (with a strong robot accent):

    “Please tell me what I can help you with. For example, you can ask me to ‘check the balance of my account’.”

    This is known as interactive voice response (IVR): an automated phone system that interacts with callers and acts based on their answers and choices. Callers are usually offered some choices via a menu, and their choice decides what the phone system does next. If the user request is too complex, the system can route the caller to a human agent. This can greatly reduce labor costs and save companies time.

    Intents are usually very helpful when dealing with calls like these. An intent is a group of sentences or utterances representing a certain user intention. For example, “weather forecast” can be an intent, and this intent can be triggered by different sentences. See the picture of a Google Dialogflow example below. Intents can be organized together to accomplish complicated interactive human-computer conversations, such as booking a restaurant or ordering a flight ticket.

    Google Dialogflow.
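    A heavily simplified sketch of intent matching, with hypothetical intents and trigger phrases (real systems like Dialogflow use trained language models rather than bare keyword overlap):

```python
# Each intent maps to a set of trigger phrases; an utterance is routed to the
# intent whose triggers share the most words with it. Intents are illustrative.
intents = {
    "weather_forecast": ["what is the weather", "will it rain today", "weather forecast"],
    "book_restaurant": ["book a table", "reserve a restaurant", "table for two"],
}

def match_intent(utterance):
    words = set(utterance.lower().split())
    best, best_overlap = None, 0
    for intent, triggers in intents.items():
        for trigger in triggers:
            overlap = len(words & set(trigger.split()))
            if overlap > best_overlap:
                best, best_overlap = intent, overlap
    return best

print(match_intent("will it rain in Paris today"))  # → weather_forecast
```

    Different sentences trigger the same intent, and intents can then be chained into the multi-step dialogues described above.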

    AI-powered call operators

    By adopting NLP, companies can take call operation services to the next level. Conventionally, call operators need to consult a hundred-page professional manual to deal with each customer call and solve every user problem case by case, a process that is extremely time-consuming and often fails to offer callers satisfying solutions. With an AI-powered call center, however, dealing with customer calls can be both pleasant and efficient.

    AI-aided call operators with greater efficiency. Image source: Pexels by MART PRODUCTION.

    When a customer dials in, the system immediately looks up the customer and their order information in the database, so the call operator has a general idea of the case: how old the customer is, their marital status, what they have purchased in the past, and so on. During the conversation, the whole chat is recorded, with a live chat log shown on the screen (thanks to live automatic speech recognition). Moreover, when a customer asks a hard question or starts complaining, the machine catches it automatically, looks into the AI database, and suggests the best way to respond. With a decent deep learning model, your service can give customers correct answers to more than 99% of their questions and can handle complaints with the most appropriate wording.

    Knowledge graph

    A knowledge graph is an information-based graph that consists of nodes, edges, and labels, where a node (or vertex) usually represents an entity: a person, a place, an item, or an event. Edges are the lines connecting the nodes, and labels signify the connection or relationship between a pair of nodes. A typical knowledge graph example is shown below:

    A sample knowledge graph. Source: A guide to Knowledge Graphs.
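    The node-edge-label structure described above can be sketched as a set of (subject, predicate, object) triples; the entities below are illustrative only.

```python
# A minimal knowledge graph stored as labeled-edge triples.
triples = [
    ("Marie Curie", "born_in", "Warsaw"),
    ("Marie Curie", "field", "Physics"),
    ("Marie Curie", "spouse", "Pierre Curie"),
    ("Pierre Curie", "field", "Physics"),
]

def neighbors(node):
    # All edges touching a node, together with their labels.
    return [(p, o) for s, p, o in triples if s == node] + \
           [(p, s) for s, p, o in triples if o == node]

def query(subject, predicate):
    # Follow a labeled edge from a subject node.
    return [o for s, p, o in triples if s == subject and p == predicate]

print(query("Marie Curie", "field"))  # → ['Physics']
```

    Question-answering and recommendation systems built on a knowledge graph boil down to traversals like these, just over millions of nodes and edges.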

    The raw data for constructing a knowledge graph may come from various sources: unstructured documents, semi-structured data, and structured knowledge. Various algorithms must be applied to these data to extract entities (nodes) and the relationships between entities (edges): entity recognition, relation extraction, label mining, and entity linking, to name a few. To build a knowledge graph from documents, for instance, we first use deep learning pipelines to generate embeddings and store them in a vector database.

    Once the knowledge graph is constructed, you can see it as the underlying pillar for many more specific applications like smart search engines, question-answering systems, recommending systems, advertisements, and more.

    Endnote

    This article introduces the top five real-world NLP applications. Leveraging NLP in your business can greatly reduce operational costs and improve user experience. Of course, apart from the five applications introduced in this article, NLP can facilitate more business scenarios including social media analytics, translation, sentiment analysis, meeting summarizing, and more.

    There are also a bunch of NLP+, or more generally, AI+ concepts that have been getting more and more popular in recent years. For example, with AI + RPA (robotic process automation), you can easily build smart pipelines that complete workflows for you automatically, such as an expense reimbursement workflow where you just upload your receipt and AI + RPA does all the rest. There’s also AI + OCR, where you just take a picture of, say, a contract, and the AI tells you if there’s a mistake in it, for instance that the telephone number of a company doesn’t match the number shown in a Google search.

    Source

  • Researchers Find Way to Run Malware on iPhone Even When It’s OFF

    A first-of-its-kind security analysis of iOS Find My function has demonstrated a novel attack surface that makes it possible to tamper with the firmware and load malware onto a Bluetooth chip that’s executed while an iPhone is “off.”

    The mechanism takes advantage of the fact that wireless chips related to Bluetooth, near-field communication (NFC), and ultra-wideband (UWB) continue to operate while iOS is shut down and the device enters a “power reserve” Low Power Mode (LPM).

    While this is done to enable features like Find My and facilitate Express Card transactions, all three wireless chips have direct access to the secure element, academics from the Secure Mobile Networking Lab (SEEMOO) at the Technical University of Darmstadt said in a paper.

    “The Bluetooth and UWB chips are hardwired to the Secure Element (SE) in the NFC chip, storing secrets that should be available in LPM,” the researchers said.

    “Since LPM support is implemented in hardware, it cannot be removed by changing software components. As a result, on modern iPhones, wireless chips can no longer be trusted to be turned off after shutdown. This poses a new threat model.”

    The findings are set to be presented at the ACM Conference on Security and Privacy in Wireless and Mobile Networks (WiSec 2022) this week.

    The LPM features, newly introduced last year with iOS 15, make it possible to track lost devices using the Find My network. Current devices with Ultra-wideband support include iPhone 11, iPhone 12, and iPhone 13.

    A message displayed when turning off iPhones reads thus: “iPhone remains findable after power off. Find My helps you locate this iPhone when it is lost or stolen, even when it is in power reserve mode or when powered off.”

    Malware

    Calling the current LPM implementation “opaque,” the researchers not only observed occasional failures when initializing Find My advertisements during power off, effectively contradicting the aforementioned message, but also found that the Bluetooth firmware is neither signed nor encrypted.

    By taking advantage of this loophole, an adversary with privileged access can create malware that’s capable of being executed on an iPhone Bluetooth chip even when it’s powered off.

    However, for such a firmware compromise to happen, the attacker must be able to communicate to the firmware via the operating system, modify the firmware image, or gain code execution on an LPM-enabled chip over-the-air by exploiting flaws such as BrakTooth.

    Put differently, the idea is to alter the LPM application thread to embed malware, such as those that could alert the malicious actor of a victim’s Find My Bluetooth broadcasts, enabling the threat actor to keep remote tabs on the target.

    “Instead of changing existing functionality, they could also add completely new features,” SEEMOO researchers pointed out, adding they responsibly disclosed all the issues to Apple, but that the tech giant “had no feedback.”

    With LPM-related features taking a stealthier approach to carrying out their intended use cases, SEEMOO called on Apple to include a hardware-based switch to disconnect the battery, so as to alleviate any surveillance concerns that could arise from firmware-level attacks.

    “Since LPM support is based on the iPhone’s hardware, it cannot be removed with system updates,” the researchers said. “Thus, it has a long-lasting effect on the overall iOS security model.”

    “Design of LPM features seems to be mostly driven by functionality, without considering threats outside of the intended applications. Find My after power off turns shutdown iPhones into tracking devices by design, and the implementation within the Bluetooth firmware is not secured against manipulation.”

    Source

Virtual Identity