Designing, Operating and Managing an Enterprise Data Lake


by Mike Ferguson

Description

Most organisations today are dealing with multiple silos of information. These include Cloud and on-premises transaction processing systems, multiple Data Warehouses, Data Marts, Reference Data Management (RDM) systems, Master Data Management (MDM) systems, Enterprise Content Management (ECM) systems and, more recently, Big Data platforms such as Hadoop and other NoSQL databases.

In addition, the number of data sources is increasing dramatically, especially from outside the enterprise. Given this situation, it is not surprising that many companies have ended up managing information in silos, with different tools being used to prepare and manage data across these systems with varying degrees of governance.

Furthermore, it is not only IT that is now integrating data. Business users are also getting involved, using new self-service data wrangling tools. The question is, is this the only way to manage data? Is there another level we can reach that allows us to more easily manage and govern data across an increasingly complex data landscape?

This seminar looks at the challenges faced by companies trying to deal with an exploding number of data sources, data collected in multiple data stores (Cloud and on-premises) and multiple analytical systems, and at the requirements for defining, governing, managing and sharing trusted, high-quality information in a distributed and hybrid computing environment.

It also explores a new approach in which IT data architects, business users and IT developers collaborate in building and managing an Enterprise Data Lake to get control of your data. This includes data ingestion, data discovery, data profiling and tagging, and publishing data in an information catalog.

It also covers refining raw data to produce Enterprise Data Services that can be published in a catalog and made available for consumption across your company. We introduce multiple Data Lake configurations, including a centralised Data Lake and a ‘logical’ distributed Data Lake, as well as execution and governance across multiple data stores.

It emphasises the need for a common collaborative process and a common approach to governing and managing data of all types.

What you will learn

  • How to define a strategy for producing trusted data-as-a-service in a distributed environment of multiple data stores and data sources
  • How to organise data in a centralised or distributed data environment to overcome complexity and chaos
  • How to design, build, manage and operate a distributed or centralised Data Lake within your organisation
  • The critical importance of an information catalog for delivering data-as-a-service
  • How data standardisation and business glossaries can help define data so that it is clearly understood
  • An operating model for effective distributed information governance
  • Which technologies and implementation methodologies you need to get your data under control
  • How to apply methodologies to get Master and Reference Data, Big Data, Data Warehouse data and unstructured data under control, irrespective of whether it is on-premises or in the Cloud

Main Topics

  • Strategy & Planning
  • Methodology & Technologies
  • Data Standardisation & the Business Glossary
  • The Data Refinery Process
  • Organising the Data Lake
  • Refining Big Data & Data for Data Warehouses
  • Information Audit & Protection – the forgotten side of Data Governance