Wider application of artificial intelligence (AI), demand for greater efficiency and innovation in data centres, consolidation, wider acceptance of cryptocurrencies, and a sharper focus on security and sustainability are some of the likely key trends in the technology sector in 2024, experts said.

Khaleej Times spoke to a cross-section of tech industry leaders for their predictions for the year ahead. Excerpts:

Morey Haber, Chief Security Officer, BeyondTrust

Dedicated apps face extinction: Starting in 2024, AI may be used for trusted connections, leading to the gradual (or perhaps even rapid) obsolescence and eventual decommissioning of banking, travel, and search apps. Bank statements, full travel itineraries, and corporate reports could all be handed off to AI. Some applications may remain as trust-based connectors, but essentially, we would be entering a world of cyber-secure digital valets and valet services run by AI, with requests issued by voice command and results returned in richer and richer formats as technology evolves.

Fred Lherault, CTO Emerging, Pure Storage

Demand for greater efficiency and innovation in data centres to grow as capacity crunch hits: Many organisations that are reliant on data centres are reporting that their most pressing issue right now is one of capacity. A growing number of data centres are full, and don’t have the space or power available to deploy new platforms. In 2024, this will result in widespread efforts to achieve efficiency gains, even on existing data centre platforms, as this is the only way they will be able to reclaim space and power to accommodate the use of new technologies inside the data centre.

To optimise the sustainability of existing data centre footprints, we’ll see operators looking to switch to newer, more power-efficient technology with smaller space and cooling requirements. This is, in essence, extending the life of the data centre – an essential factor when considering the need for new technologies in the wake of the rise of AI.

Cathy Mauzaize, President EMEA, ServiceNow

Technologies will support growth of new business models and revenue streams: Innovation is always top of mind for any forward-thinking business leader. However, the pace of innovation is faster than ever and isn’t showing signs of slowing down. This accelerating pace will make it necessary for organisations to embrace new business models and revenue streams in the coming year – and the CIO may just sit at the heart of this.

According to the 2023 State of the CIO Report, more than two thirds (68 per cent) of CIOs acknowledge that the creation of new revenue streams is among their job responsibilities. Incumbents will leverage technology to reach customers in novel ways, while agile startups will continue to design disruptive offerings from scratch. With change coming faster than ever, companies will demand faster time-to-value when investing in new solutions.

Collaboration will become key as organisations look to partners with complementary capabilities to jointly develop new products and services – changing the traditional vendor-customer relationship. The companies that openly share knowledge and expertise with their network will be best positioned to thrive amidst the coming wave of technological advancement.

Hadi Jaafarawi, Managing Director – Middle East, Qualys

From consolidation to simplification: Global averages for the number of security tools installed by organisations run as high as 90. Thus far, consolidation has been the go-to strategy for CISOs, including those based in the UAE. But in 2024, security leaders will go one step further and look to simplify security processes. Ultimately, the tools that best help organisations de-risk their business by effectively measuring, communicating and eliminating cyber risk will be the ones that win out. We will see automation help SOCs find the genuine risks and ignore the false alarms by taking a risk-based approach to security alerts and allowing AI to triage threats. Remediation will become more automated, allowing cyber talent to be put to its best use.
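To make the idea of risk-based triage concrete, here is a minimal, purely illustrative sketch; the scoring factors, weights and threshold are hypothetical assumptions, not any vendor's method:

```python
# Hypothetical sketch of risk-based alert triage: score each alert by asset
# criticality, exploitability and detection confidence, then surface only the
# highest-risk items for a human analyst. All fields and values are made up.

from dataclasses import dataclass


@dataclass
class Alert:
    name: str
    asset_criticality: float  # 0-1: how important the affected system is
    exploitability: float     # 0-1: likelihood the finding is exploitable
    confidence: float         # 0-1: detector certainty (helps filter false alarms)


def risk_score(alert: Alert) -> float:
    # Simple weighted product: a low value on any factor drags the risk down.
    return alert.asset_criticality * alert.exploitability * alert.confidence


def triage(alerts: list[Alert], threshold: float = 0.5) -> list[Alert]:
    """Return only the alerts worth an analyst's time, highest risk first."""
    risky = [a for a in alerts if risk_score(a) >= threshold]
    return sorted(risky, key=risk_score, reverse=True)


if __name__ == "__main__":
    sample = [
        Alert("Outdated TLS on test server", 0.2, 0.6, 0.9),
        Alert("Credential stuffing on payment API", 0.95, 0.8, 0.85),
    ]
    for a in triage(sample):
        print(f"{a.name}: risk={risk_score(a):.2f}")
```

In practice the scoring would be learned or tuned rather than hard-coded, but the principle is the same: rank by business risk, not by raw alert volume.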

Christian Borst, CTO EMEA, Vectra AI

Widespread LLM usage will fade away, but deep fakes will skyrocket: Many organisations are exploring ways to use Large Language Models (LLMs) following the initial wave of hype this year. But when you scratch beneath the surface, it’s clear the novelty factor will soon evaporate. LLMs are typically quite difficult to use because they are unable to understand context or provide reliable outputs, which restricts their wider practical use. Next year we will therefore see businesses scale back their use of LLMs as they wait for these tools to become more functional and user-friendly.

Threat actors will face the same issues with using LLMs, so we likely won’t see much complex activity like AI generating malicious code. But we can expect cybercriminals to harness generative AI to create more realistic and sophisticated deep fakes. This will give them a better chance of tricking users into giving up sensitive data or clicking on something malicious through more convincing audio or visual phishing lures.

Matt Cloke, CTO, Endava

Technology focus to pivot from maintaining to gaining: While businesses have spent the past few years steadily focused on initiatives that embed operational stability into their systems, they are now preparing to ramp up innovation to gain a competitive advantage. Endava data shows that only 15 per cent see operational stability as the top priority for their technology, and cutting-edge tools will take precedence. AI will be at the top of the agenda: almost 80 per cent of organisations are prioritising it highly or very highly, and only 2 per cent don’t see it as a priority. As well as a growing appetite for AI, 75 per cent have their sights set on big data and predictive analytics as key technologies to help them adapt in an ever-changing business world.

However, as companies weigh up their strategic investments across these areas, they will have to be careful to strike the balance between innovation and stability to prevent disruption. Assessing the integrity of existing processes and systems will be critical to seeing whether they can successfully adopt cutting-edge tools in an iterative manner, or whether these changes will potentially upend infrastructure.

Caroline Malcolm, VP of Global Public Policy, Chainalysis

Crypto use cases rival those of traditional finance: 2023 proved to the world that the UAE government recognises the value and potential of digital assets to deliver a wide range of benefits to both businesses and consumers. With regulations in place, businesses will now be keeping a close eye on what technologies government entities are utilising to manage the risks associated with digital assets.

If past experience is anything to go by, the government will set the pace, and the private sector will follow suit. We have certainly seen this at Chainalysis. Our partnership with the UAE Ministry of Artificial Intelligence, Digital Economy and Remote Work Applications to provide virtual training programmes for the country’s government entities rapidly led to conversations with prominent financial sector entities that are eagerly waiting in the wings to launch their crypto-centric services. Through 2024, we can expect governments and enterprises to leverage the regulatory clarity in the country and invest in responsibly building out the ecosystem.

Against this backdrop, the UAE’s crypto sector looks to be in high gear on a highway to success. And as more institutions leverage crypto as an asset class, we will see more innovation in the segment. The potential use cases could rival, or even outdo, those of traditional finance, placing the UAE at the forefront of global innovation.

Kurt Muehmel, Everyday AI Strategic Advisor, Dataiku

Generative AI regulation conundrum for organisations: Despite the fact that regulators are moving much more slowly than the technology itself, we expect to see more work surrounding the regulation of generative AI tools like ChatGPT. However, while enterprises definitely anticipate future regulation, they don’t know exactly what it will be, which puts the onus on them to be prepared and implement good AI governance practices. They will need to be able to communicate clearly what they are using generative AI for, how it’s being used, and what they are doing to avoid shadow AI. While no one knows for certain what the regulation is going to be, we do know that it’s important to comply as quickly as possible, and that will be difficult without good AI governance practices in place.

Jeff Stewart, VP of Global Solutions Engineering, SolarWinds

Observability across the full tech stack will become a priority: According to SolarWinds research, the typical enterprise loses more than $13M annually to costs associated with the nine brownouts or outages experienced each month. Despite this, nearly half of IT professionals surveyed lack visibility into the majority of their organisation’s apps and infrastructure. AI-powered observability solutions address this by collecting data to provide information on what’s not performing as expected and why—allowing teams to take a proactive approach to eliminating downtime, innovating, and exceeding customer expectations.

Mena Migally, Regional Vice President – Emerging EMEA, Riverbed

Younger generation employees will shape enterprise IT: In a recent Riverbed survey, 64 per cent of decision-makers in the UAE and Saudi Arabia said younger generation employees are the most demanding of IT’s time, with nearly all (97 per cent) of respondents believing they will need to provide more advanced digital experiences to meet their needs. If regional businesses are to attract top talent, they have to ensure their tech investments align with the expectations of younger generation employees.

And what are these tech investments likely to be? UAE and Saudi leaders believe that AI (54 per cent), cloud (50 per cent), digital experience management solutions (43 per cent), application/network acceleration technology (36 per cent), and automation (34 per cent) are crucial for organisations looking to remain competitive in today’s marketplace.

Joseph Carson, Chief Security Scientist & Advisory CISO, Delinea

AI compliance accelerates: In 2024, cybersecurity compliance is expected to evolve significantly, driven by emerging technologies, an evolving threat landscape, and changing regulatory frameworks. Privacy regulations like the GDPR, CCPA and the UAE’s Data Protection Law have set the stage for stricter data protection requirements. We can expect more regions and countries to adopt similar regulations, expanding the scope of compliance requirements for organisations that handle personal data.

Artificial intelligence and machine learning will play a more prominent role in cybersecurity compliance. These technologies will be used to automate threat detection, analyse vast datasets for compliance violations, and provide real-time insights, making it easier for organisations to stay compliant.

Sohrob Kazerounian, Distinguished AI Researcher, Vectra AI

Multi-modal models: Although the name might suggest that LLMs are solely oriented towards the ingestion and generation of language, LLMs are going multi-modal! The remarkable capabilities of generative AI to produce images, audio, video and more are increasingly going to be combined with LLMs. Not only will you be able to interact with the LLM of your choice simply via natural language, but you will be able to upload images alongside text, speak to the model through voice chats, and have the model generate multi-modal outputs. This flexibility of inputs and outputs is not only incredibly useful on its face, but opens the door to reasoning in incrementally better and increasingly human ways.
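A rough sketch of what mixing modalities in a single request could look like; the client object, its generate method and the parameter names below are hypothetical placeholders, not any specific vendor’s API:

```python
# Hypothetical multi-modal request: a text prompt plus an image go in, and
# both text and audio are requested back. None of these names refer to a
# real SDK; the 'client' is assumed to be supplied by whichever service is used.
from pathlib import Path


def ask_multimodal(client, prompt: str, image_path: str):
    """Send a text prompt together with an image and request mixed outputs."""
    image_bytes = Path(image_path).read_bytes()
    return client.generate(
        inputs=[
            {"type": "text", "content": prompt},
            {"type": "image", "content": image_bytes},
        ],
        output_modalities=["text", "audio"],  # ask for a spoken reply as well
    )
```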

Rafael Pena, Researcher, Trellix Advanced Research Center

AI-generated voice scams for social engineering: The rise of scams involving AI-generated voices is a concerning trend that is set to grow in the coming year, posing significant risks to individuals and organisations. These scams often involve social engineering tactics, where scammers use psychological manipulation techniques to deceive individuals into taking specific actions, such as disclosing personal information or executing financial transactions. AI-generated voices play a crucial role in this, as they can instill trust and urgency in victims, making them more susceptible to manipulation.

Recent advancements in artificial intelligence have greatly improved the quality of AI-generated voices. They can now closely mimic human speech patterns and nuances, making it increasingly difficult to differentiate between real and fake voices. Furthermore, the accessibility and affordability of AI-voice generation tools have democratized their use. Even individuals without technical expertise can easily employ these tools to create convincing artificial voices, empowering scammers.

Scalability is another key factor. Scammers can leverage AI-generated voices to automate and amplify their fraudulent activities. They can target numerous potential victims simultaneously with personalized voice messages or calls, increasing their reach and effectiveness. Detecting AI-generated voices in real-time is a significant challenge, particularly for individuals who are not familiar with the technology. The increasing authenticity of AI voices makes it difficult for victims to distinguish between genuine and fraudulent communications. Additionally, these scams are not limited by language barriers, allowing scammers to target victims across diverse geographic regions and linguistic backgrounds.

Phishing and vishing attacks are both on the rise. It’s only a logical next step that, as the technology for AI-generated voices improves, threat actors will use these tools on live phone calls with victims – impersonating legitimate entities to amplify the effectiveness of their scams.
