Qlik recently published its predictions of the main forces shaping data analytics in 2020, and we have summarized them for you. Read the article below to find out more. You can also register to view the on-demand webinar about 2020 Data & BI Trends HERE.

Technology is getting more powerful every day. We have countless ways to collect information and connect people, yet the world has become more fragmented than ever before. It’s ironic – and it’s holding businesses back.

When it comes to data, analytics alone is no longer enough, Qlik states. You need both synthesis and analysis to find meaningful insights in the masses of data available. And those who embrace both will be primed to lead the way.

Here are Qlik's top 10 predictions in data & business intelligence for 2020.

1. Becoming a real-time enterprise is no longer optional.

If you’re going to lay your data mosaic, you need information delivered to the right place at the right time. The world’s leading organizations are now operating in real-time – the speed needed to monitor the efficacy of marketing campaigns, detect anomalies around fraud, provide healthcare and humanitarian services, conduct on-the-spot personalization, or even optimize supply chains. The convergence of three recent breakthroughs will facilitate all of this in a significant way in 2020.

• High speeds, all the time, everywhere.
Thanks to 5G and IPv6, we now have access to ultra-connectivity.

• Infinitely scalable workloads, where you need them.
As everything is moving to the cloud, Kubernetes is the rising star – allowing the right workloads to run in the right places, even on edge devices.

• Powerful streaming architecture.
Change data capture (CDC) and real-time data streaming, enabled by solutions like Apache Kafka, efficiently ingest and process data – with low latency at high scale (a minimal consumer sketch follows after this list).
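To make the streaming piece concrete, here is a minimal sketch of consuming CDC-style change events from a Kafka topic with the kafka-python client. The topic name, broker address, and event shape are illustrative assumptions, not part of Qlik's text.

```python
# Minimal sketch: consuming CDC-style change events from a Kafka topic.
# Assumes the kafka-python package; the topic, broker, and event shape
# ("op", "before", "after") are illustrative, not a fixed standard.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders.cdc",                          # hypothetical CDC topic
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for message in consumer:
    event = message.value
    op = event.get("op")                   # e.g. "insert", "update", "delete"
    if op == "update":
        # React to a row-level change with low latency.
        print("before:", event.get("before"), "-> after:", event.get("after"))
```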

2. Big data is just data. Next up? Wide data

“Big data” is a relative term and a moving target. Is your current technology robust enough to handle it? If you need to replace or significantly invest in extra infrastructure to handle your data, the answer is no – and you have a big data challenge on your hands.

With infinitely scalable cloud storage, that restriction is gone. And it’s now easier than ever to perform in-database indexing and analytics. We have time-tested tools that ensure data is in the right place – and if not, that it’s easy to move. Technology has essentially caught up; the mysticism of “big data” has finally dissipated.

What’s next? Highly distributed “wide data.” With data formats now more varied and fragmented, we need new ways to deal with data that’s not only big, but wide. The need to handle different types of data has driven an explosion of databases: from 162 in 2013 to 342 in 2019, evenly split between commercial and open source. Combinations of data eat big data for breakfast, and companies that can achieve synthesis of their varied, fragmented data sources will stand strong.
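As a toy illustration of such synthesis (our own example, not Qlik's), the sketch below joins two small, differently shaped sources into one wider view; all data and column names are invented.

```python
# Toy illustration of "wide data" synthesis: two narrow, differently
# shaped sources combined into one wider view. All data is invented.
import pandas as pd

crm = pd.DataFrame({"customer_id": [1, 2],
                    "segment": ["SMB", "Enterprise"]})
tickets = pd.DataFrame({"customer_id": [1, 1, 2],
                        "severity": ["low", "high", "low"]})

# Synthesis: the combined view answers questions neither source can alone.
wide = tickets.merge(crm, on="customer_id")
print(wide.groupby("segment").size())    # tickets per customer segment
```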

3. Graph analytics and associative technology surpass SQL

For decades, we’ve accepted solutions that aren’t optimized for analytics. SQL databases with rows and columns are designed for data entry. Relational analytics tools are based on the relationships between data tables, meaning users can only explore data via predefined connections. These approaches not only prevent people from finding unexpected connections – they make fragmentation worse.

Alternative approaches like graph analytics and associative technology allow us to follow our curiosity and delve deeper. Though different technologies, they’re based on the same concept of nodes, relationships, and edges, focusing on analyzing the natural associations within data – not the data-table relationships someone defined manually. This type of analytics allows us to address much bigger problems and get better results, especially when AI is applied.
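As a toy illustration of the node/relationship/edge model (our own example, not Qlik's), the sketch below uses the networkx library to traverse associations that no predefined table join would expose; all entities are invented.

```python
# Toy illustration of the node/relationship/edge model using networkx.
# The customers, products, and suppliers are invented examples.
import networkx as nx

g = nx.Graph()
g.add_edge("customer:alice", "product:laptop", relationship="purchased")
g.add_edge("product:laptop", "supplier:acme", relationship="supplied_by")
g.add_edge("customer:bob", "supplier:acme", relationship="works_for")

# Unlike a predefined table join, we can traverse whatever path happens
# to connect two entities and surface unexpected associations.
path = nx.shortest_path(g, "customer:alice", "customer:bob")
print(" -> ".join(path))
# customer:alice -> product:laptop -> supplier:acme -> customer:bob
```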

4. DataOps plus self-service is the new agile

Self-service analytics, enabled by data discovery tools, brings business users closer to the answers they need. But that same agility hasn’t been cultivated on the data management side – until now.

Taking inspiration from DevOps, DataOps is an automated, process-oriented methodology that improves the quality and shortens the cycle time of data management for analytics. It automates data testing and deployment in real time, thanks to technology like change data capture (CDC) and streaming data pipelines. It also leverages on-demand IT resources to provide continuous data delivery. Today, 80% of data should be delivered to business users in this systematic way. When that happens, the need for standalone self-service data preparation will subside.
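One common building block of DataOps is automated data testing before deployment. Below is a minimal sketch of such a gate, assuming pandas; the column names and thresholds are invented for illustration.

```python
# Minimal sketch of an automated data test gating a DataOps pipeline.
# Column names and thresholds are invented for illustration.
import pandas as pd

def data_quality_failures(df):
    """Return a list of failure messages; empty means the batch may ship."""
    failures = []
    if df.empty:
        failures.append("batch is empty")
    elif df["customer_id"].isna().mean() > 0.01:   # > 1% missing keys
        failures.append("too many missing customer_id values")
    return failures

batch = pd.DataFrame({
    "customer_id": [1, 2, None],
    "order_ts": pd.to_datetime(["2020-01-01", "2020-01-02", "2020-01-03"]),
})
print(data_quality_failures(batch))   # ['too many missing customer_id values']
```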

With DataOps on the operational side and self-service on the business user side, companies will experience their data flowing more efficiently across the entire information value chain – enabling synthesis and analysis for laying the data mosaic.

5. Active metadata catalogs are the connective tissue for data and analytics.

Data sets are increasingly wide and distributed – which poses a big challenge for enterprises, as all of that data needs to be inventoried and synthesized. Left to its own devices, data can go stale fast. Data catalogs can help, so it’s no surprise that demand for them is skyrocketing.

A promising solution on the rise is machine learning–augmented metadata catalogs. They move data from passive to active, keeping it adaptive and changing – even across hybrid/multi-cloud ecosystems. Essentially, these metadata catalogs provide the connective tissue and governance needed to handle the agility that DataOps and self-service provide. They also include information personalization – an essential component for generating relevant insights and tailoring content. But to incorporate fragmented, distributed data, a catalog must also work beyond the environment of your analytics tool of choice.
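To make this concrete, here is a minimal sketch of the kind of record an active catalog might keep per data set; the field names and values are illustrative assumptions, not a real catalog schema.

```python
# Minimal sketch of the kind of record an active metadata catalog keeps.
# Field names and values are illustrative assumptions, not a real schema.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class CatalogEntry:
    name: str
    source: str                  # where the data physically lives
    owner: str
    last_profiled: datetime      # "active" metadata is kept up to date
    quality_score: float         # e.g. from automated profiling
    used_by: list = field(default_factory=list)

entry = CatalogEntry(
    name="sales_orders",
    source="s3://warehouse/sales/orders/",   # hypothetical location
    owner="data-engineering",
    last_profiled=datetime(2020, 1, 15),
    quality_score=0.97,
    used_by=["weekly-revenue-dashboard", "churn-model"],
)
print(entry.name, entry.quality_score)
```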

6. The emergence of Data Literacy as a service.

It’s no longer enough to drop tools on users and hope for the best. Establishing an inclusive system of synthesis and analysis that thrives on participation will help, but no data and analytics technology or process in the world can function if people aren’t on board. A critical component for pushing BI tools beyond the industry-standard 35% adoption rate is to get people confident in using data. The clear solution? Data Literacy as a Service (“DLaaS”).

In 2020, as scaling data expertise becomes more critical, businesses will expect to partner with vendors on this journey. What’s needed is a combination of software, education, and support – as a service – with outcomes in mind. Driving adoption to 100% makes data a part of every business decision. To reach this goal, the best place to start is by diagnosing where an organization falls on the data literacy spectrum, then working holistically toward making the necessary improvements.

7. Multifaceted interactions will move us beyond search

Search and voice, powered by chatbots, have emerged as powerful interfaces to query data – especially with mobile applications. But they aren’t enough. We need to combine natural language with tried-and-true approaches for data queries, such as visual analysis and filtering in dashboards, to build the foundation of a multifaceted interface.

In 2020, more immersive, multifaceted interactions will evolve to enable expressions – and even thoughts – to control or query devices. Inventions around AR/VR, wearable sensors, and machine learning software help machines understand human expressions. Plus, neuroscience has enabled the transmission of electrical signals from the brain to computer inputs. These inventions will evolve the way humans experience and interact with data. They can hold great benefits for all – especially people with disabilities – but we must also be aware of how they can be used for ill, and use them responsibly.

8. Ethics and responsible computing are now critical

Most technological leaps improve the world around us in some way and bring us collectively to a better place. But some “advances” can be a cause for serious concern. How do algorithms affect our privacy? Our free will? From incorrect use of personal data to auto-profiling, the temptation for exploitation can be hard to resist.

And then there are regulations like the US CLOUD Act and GDPR, and the question of whether your cloud strategy is in compliance. Borderless corporations are especially affected, as rules vary from country to country. Today, a hybrid multi-cloud approach is no longer optional – it’s a must.

The time has come for a broader sense of corporate responsibility. Beyond compliance, companies must gain and hold the trust of their customers. Once an organization is seen as crossing a privacy line, the damage to its brand could be irreparable. The question isn’t only whether something can be done, but whether it should be done. Establishing a digital ethics board at your organization is one way to improve your chances of minimizing risk and maximizing reward. Longer-term, organizations need to shift focus from shareholders to stakeholders.

9. “Shazam” for data: What’s possible

Shazam, the now-famous service that identifies songs through your device’s microphone, kicked off a whole category of discovery. Google Lens uses deep learning and visual analysis to identify plants and animals, read and translate text, and more. Amazon is launching similar technology for finding clothes simply by analyzing a photo. But can we “Shazam” our data?

In 2020, AI embedded across the whole information value chain will allow algorithms in analytic systems to get better at fingerprinting our data, finding anomalies, and (not least) suggesting new data to analyze. We’ll be able to point to a data source and see where it came from, who is using it, how much it’s changed, and whether its quality is good. It will allow more insights from data, no matter its size – and combine synthesis with analysis.
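As a rough illustration of the anomaly-flagging idea (a simple z-score check, not Qlik's method), the sketch below flags values that deviate sharply from the rest; the figures are invented.

```python
# Rough sketch of anomaly flagging with a simple z-score check.
# Real data "fingerprinting" is far richer; the figures are invented.
import statistics

def flag_anomalies(values, threshold=2.0):
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [v for v in values if abs(v - mean) > threshold * stdev]

daily_revenue = [102, 98, 105, 101, 99, 97, 480]   # one suspicious spike
print(flag_anomalies(daily_revenue))               # -> [480]
```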

10. Independence vs. stack: The sequel

Last year saw significant consolidation in the data and analytics space, with large cloud data and application stacks acquiring smaller analytics vendors. The goal, presumably, is to gain more control over customers and their data and gradually monetize it. Sound familiar? That’s because a decade ago, on-premises data and application stacks went on a similar buying blitz. During that time, R&D efforts focused on technology integration at the expense of innovation. The good news? That blitz sparked the emergence of a new wave of vendors that could keep their customers’ data and analytics independent.

In 2019, we also saw cloud costs ballooning for customers locked into one ecosystem – a tarnish on the cloud’s silver lining, so to speak. Perhaps the bigger concern, however, is whether customers who move their data in can move it out again – and at what cost. Today, hybrid and multi-cloud platforms are necessary. Data and analytics are the lifeblood of modern-day enterprises and simply too important to belong to just one stack. In fact, most (if not all) organizations have multiple applications and data sources stored in a variety of locations. We’ve seen this movie before, and we all know how it ends: companies need independent analytics partners that can connect silos – and help lay data mosaics that foster true business growth.

Source: Qlik eBook “2020 Data & BI Trends: Analytics Alone Is No Longer Enough”

EMARK Online DataTalks – also check out our other webinars

Superfast insights for agile steering of the business
