TechXchange Conference 2023 - Labs for Data and AI Tracks

By NICK PLOWDEN posted Thu July 06, 2023 08:55 AM


TechXchange Conference 2023 Data and AI Tracks - Overview

We know that you’ve been eagerly awaiting more details on the technical content planned for the upcoming IBM TechXchange Conference 2023, happening in Las Vegas September 11-14. Here is a preview of some of the labs we are planning. This is only a small set of the 1,000+ sessions and labs that will offer you the opportunity to increase your technical knowledge and capabilities. Please note that the titles and lab descriptions are still being refined, but this gives you a sense of what is coming.

Register today for the first TechXchange event for technologists using IBM products and solutions. Also, save $300 with early bird pricing if you register before July 21st. See you there!

Here is the list of labs for the AI and Data Science portfolio in the AI and Data Science tracks:

How to deploy an AI model on z/OS for ultra-fast inferencing
This hands-on lab demonstrates the steps required to deploy an AI model on z/OS using the latest AI technologies available on the platform. The participant will take a credit card fraud detection model from its creation all the way to getting an inference score:
1) Train a model written in Python in a Jupyter notebook using the open-source TensorFlow AI framework.
2) Export the model to the Open Neural Network Exchange (ONNX) format.
3) Import that model into Watson Machine Learning for z/OS, which generates an inference program that leverages the IBM z16's state-of-the-art on-chip Integrated Accelerator for AI.
4) Invoke this z/OS-served inference program from a z/OS application to obtain an inference score that is used to decide whether to accept or reject the purchase.

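For a sense of what steps 1 and 2 look like in practice, here is a minimal sketch that trains a toy Keras classifier and converts it with the open-source tf2onnx package; the feature shape, data, and file name are illustrative, not the lab's actual assets.

```python
# Minimal sketch of steps 1-2: train a toy fraud classifier, then
# export it to ONNX. Feature shapes and names are placeholders.
import numpy as np
import tensorflow as tf
import tf2onnx

# Toy stand-in for credit card transaction features (30 columns).
X = np.random.rand(1000, 30).astype("float32")
y = (np.random.rand(1000) > 0.99).astype("float32")  # rare "fraud" label

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(30,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=2, batch_size=32)

# Convert to ONNX; the resulting model.onnx is what would be imported
# into Watson Machine Learning for z/OS in step 3.
spec = (tf.TensorSpec((None, 30), tf.float32, name="input"),)
tf2onnx.convert.from_keras(model, input_signature=spec,
                           output_path="model.onnx")
```
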
Hands-on Introduction to Generative AI / Large Language Models (LLMs)
The latest generation of AI innovation, large language models and generative AI, has created a paradigm shift in how organizations develop, deploy, and scale AI to production. Curious how ChatGPT works under the covers? Attend this session!

In this hands-on lab you will learn about the different pretrained foundation and large language models, and best practices on which models address different types of business use cases. You will perform hands-on prompt engineering, including techniques like one-shot and multi-shot model tuning with your data. Finally, you will integrate your tuned model into an end-user sample application, all in 60 minutes.

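As a taste of the prompt-engineering portion, here is a hypothetical example of the difference between a zero-shot and a one-shot prompt; the task and wording are illustrative only, not the lab's actual material.

```python
# Illustrative zero-shot vs. one-shot prompts (hypothetical task text).
zero_shot = (
    "Classify the sentiment of this review as positive or negative.\n"
    "Review: The battery died after two days.\n"
    "Sentiment:"
)

# The one-shot variant prepends a single worked example, which often
# steers a large language model toward the desired output format.
one_shot = (
    "Classify the sentiment of each review as positive or negative.\n"
    "Review: I love how light this laptop is.\n"
    "Sentiment: positive\n"
    "Review: The battery died after two days.\n"
    "Sentiment:"
)
```
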
Connect Watson Assistant to your Business Data using Custom Extensions
This hands-on lab is designed to show you how to create an API and document it so that it can be used by Watson Assistant. The lab will start by creating a custom API that provides user information when requested. After local testing is complete, the next step will be to create an OpenAPI specification that can be read by Watson Assistant. Finally, the API will be connected to the Assistant and tested.
These advanced techniques can be used for any situation or customization of an existing API. At the end of the lab, users can incorporate this knowledge into their custom demonstrations, POCs, and MVPs.
General programming skills and Watson Assistant Actions skills are necessary.

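As a rough sketch of the kind of API the lab starts with, here is a small Flask service that returns user information; the endpoint, fields, and data are hypothetical, and the lab's actual API may differ.

```python
# Hypothetical user-info API: a Watson Assistant custom extension
# would call an endpoint like this once it is described in an
# OpenAPI 3.0 document.
from flask import Flask, jsonify

app = Flask(__name__)

USERS = {"42": {"name": "Ada Lovelace", "plan": "premium"}}  # toy data

@app.get("/users/<user_id>")
def get_user(user_id):
    user = USERS.get(user_id)
    if user is None:
        return jsonify(error="not found"), 404
    return jsonify(user)

if __name__ == "__main__":
    app.run(port=8080)
```
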
Conversational AI with Low-Code/No-Code Development and Multi-Channel Deployment
Want to learn how to build a digital virtual assistant, with backend integration examples, in a low-code/no-code environment? Come put your hands on the keyboard and walk through lab exercises that show you why our tooling can be managed by departmental subject matter experts instead of IT experts.

Virtual Agent with Generative AI: A Hands-on Lab with Watson Assistant, Discovery and NeuralSeek
Learn how to leverage generative AI to drastically simplify building and maintaining virtual agents using Watson Assistant, Watson Discovery, and NeuralSeek. Participants will gain hands-on experience in building a virtual agent that provides human-like, trustworthy responses based on the customer's corporate data. Attendees will have a production-ready virtual agent to explore for their company by the end of the session. This session is ideal for individuals interested in the future of AI, conversational interfaces, and virtual agents. No prior experience with AI or programming is necessary, making the lab accessible to a diverse audience. Don’t miss this exciting opportunity to explore the cutting-edge intersection of AI and virtual agents and gain valuable skills in the process!

AI Model Governance
Learn how IBM OpenPages can help you define and establish a model management process that will ensure adequate risk and compliance coverage across the enterprise.

How to pick the right foundation model for your use case: IBM, open source, Hugging Face, etc.
With IBM's new foundation models, and with over 11,500 text-generation AI models available on Hugging Face, which one should you pick for your use case?

In this lab session, we walk you through the different types of models, illustrate the use cases they are optimized for, balancing accuracy and creativity with trust and cost, and show you how to work with the models in watsonx.ai.

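As a rough sketch of what working with a hosted model can look like, here is a hedged example using the ibm-watson-machine-learning Python SDK's foundation-model interface as it existed around mid-2023; the model ID, credentials, and parameter values are placeholders and may differ from the lab's setup.

```python
# Hedged sketch: prompt a hosted foundation model in watsonx.ai.
# All credentials and IDs below are placeholders.
from ibm_watson_machine_learning.foundation_models import Model

model = Model(
    model_id="google/flan-ul2",  # one of several hosted models
    credentials={
        "url": "https://us-south.ml.cloud.ibm.com",
        "apikey": "YOUR_API_KEY",
    },
    project_id="YOUR_PROJECT_ID",
    params={"decoding_method": "greedy", "max_new_tokens": 100},
)

print(model.generate_text("Summarize the following meeting notes: ..."))
```
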
Implement MLOps for open source data science in Watson Studio or watsonx.ai
Discover the power of Watson Studio, an all-in-one data science platform that streamlines every aspect of the MLOps process, from model development to deployment and monitoring. Join our hands-on lab and gain practical experience in implementing MLOps using Watson Studio. Whether you're utilizing Cloud Pak for Data or the advanced capabilities of watsonx.ai, this session will equip you with the skills and knowledge needed to successfully navigate the MLOps journey.

Analyzing Complex Documents in a Generative AI World
Take advantage of award-winning Watson Natural Language Processing (NLP) capabilities by adding prebuilt enrichments to your documents with Watson Discovery and make more informed decisions.
Entities: Recognizes proper nouns such as people, cities, and organizations that are mentioned in the content.
Keywords: Recognizes significant terms in your content.
Part of Speech: Identifies the parts of speech (nouns and verbs, for example) in the content.
Sentiment: Understands the overall sentiment of the content.
Contracts enrichment: Identifies contract-related elements in a document.
Smart Document Understanding (SDU): Wizard-created models that learn about the content of a document based on the document's structure.

Creating, Testing, and Deploying Machine Learning Models with IBM Watson Studio - Lab 1
This lab takes the data scientist (or business analyst) on a journey from the creation of several machine learning models to their deployment and testing. Various tools and services, as well as programming and graphical user interfaces, are used in the process. The lab ends with the sharing of assets on GitHub and a brief discussion on governance and stewardship.

Administration of IBM Cloud Pak for Data - Lab 1
This lab is designed for professional administrators of Cloud Pak for Data and is intended for practicing administration skills. In this course, you follow Emily, the Cloud Pak for Data administrator at a fictional financial services corporation, as she performs administration tasks. The purpose of this lab is to explain the most important administration activities related to a Cloud Pak for Data environment. From the presented demonstrations, you learn step by step how to perform each task. You then verify the acquired knowledge by completing the designated hands-on exercises.

Use low-code data science in Watson Studio
Low-code data science tools in Watson Studio can be used to implement typical data science use cases that range from preparing and analyzing data to creating sophisticated models. In this session you will learn how to use Modeler Flows and AutoAI to implement an end-to-end data science use case.

Getting Started with Generative AI in watsonx.ai
Interested in implementing generative AI, but not sure how to get started? In this session, you will learn how to identify suitable use cases for generative AI and gain hands-on experience using watsonx.ai to implement two of the most popular generative AI use cases: question answering and summarization. You will not only explore prompt tuning techniques but also learn how to deploy your model effectively, enabling seamless integration with line-of-business applications. Join us in this lab session and take the first steps towards unlocking the potential of generative AI in your organization!

Model Lifecycle Governance in Watson Studio and watsonx.ai
Model lifecycle governance is crucial for organizations that are looking to harness the full potential of AI while upholding ethical standards and meeting regulatory requirements. In this lab, we will explore the critical aspects of managing and governing machine learning models throughout their lifecycle. You will gain the knowledge and practical skills necessary to introduce visibility into your model lifecycle, enhance model transparency, and establish robust model governance frameworks to detect and address model issues, mitigate bias, and ensure fairness in your AI systems.

Here is the list of labs for the Business Analytics portfolio in the Data Track:

CA-101 Go from spreadsheets to dashboard quickly with Cognos Analytics
The art of quickly visualizing your data with self-service dashboard creation using IBM Cognos Analytics.

CA-201 Actionable Insights with Cognos Analytics Assistant
Leverage AI to deliver insights from your data using the Cognos Analytics Assistant right from your dashboard.

Analytics Content Hub - 101 Speed to information
Leverage IBM Analytics Content Hub to bring all your disparate BI reporting into one lens.

PA 201 - Modeling your analytics data with IBM Planning Analytics Workbench Modeling
Model your business fast and easily with IBM Planning Analytics Workbench Modeling.

Getting the most from PAW - Workspace
This session provides a 90-minute hands-on lab on how to use Planning Analytics Workspace, including the workflow capability called Apps and Plans. You will learn how to create an application, as well as a plan to guide your users through a planning cycle using Planning Analytics.

Advanced techniques in using PAW - Planning Analytics Workspace
This session will build a deeper understanding of how to leverage Workspace for AI forecasting in planning. We will cover the univariate as well as baseline forecasting capability in Workspace. Multivariate forecasting, along with Forecast If, will also be covered, as it is the latest advanced AI capability in Workspace.

Get Hands-on with the all new Universal Reports in PAfE
This hands-on lab will allow users to build and get a real understanding of the power of Planning Analytics for Excel. The lab will focus on the new Universal reports and how to leverage hierarchy awareness. It will also include new features such as TM1Set and DefineCalc.

Advanced Intelligent Document Processing with IBM Watson Discovery - Lab 1
In this lab, you will explore intelligent document processing with the Watson Discovery user interface. You will learn how to create regular expressions, import rule-based models, customize query results, conduct web crawling, and teach your domain language to Watson Discovery to enhance the relevance and accuracy of your results.

Advanced Intelligent Document Processing with IBM Watson Discovery - Lab 1 (Repeat)
In this lab, you will explore intelligent document processing with the Watson Discovery user interface. You will learn how to create regular expressions, import rule-based models, customize query results, conduct web crawling, and teach your domain language to Watson Discovery to enhance the relevance and accuracy of your results.

Mastering Document Retrieval with IBM Watson Discovery API - Lab 1
In this comprehensive lab, you will learn how to harness the power of the Watson Discovery API to build a robust document retrieval system. From regular expressions to advanced techniques like Discovery Query Language, relevancy training, and result modification, this lab will equip you with the skills needed to create highly accurate and efficient search applications.

Mastering Document Retrieval with IBM Watson Discovery API - Lab 1 (Repeat)
In this comprehensive lab, you will learn how to harness the power of the Watson Discovery API to build a robust document retrieval system. From regular expressions to advanced techniques like Discovery Query Language, relevancy training, and result modification, this lab will equip you with the skills needed to create highly accurate and efficient search applications.

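For orientation, here is a hedged sketch of a basic query with the ibm-watson Python SDK's DiscoveryV2 client; the service URL, project ID, API key, and query text are placeholders.

```python
# Hedged sketch: query a Watson Discovery project via the ibm-watson
# SDK. All IDs, keys, and URLs below are placeholders.
from ibm_watson import DiscoveryV2
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("YOUR_API_KEY")
discovery = DiscoveryV2(version="2020-08-30", authenticator=authenticator)
discovery.set_service_url("YOUR_DISCOVERY_URL")

response = discovery.query(
    project_id="YOUR_PROJECT_ID",
    natural_language_query="termination clauses",
    count=5,
).get_result()

for result in response.get("results", []):
    print(result["document_id"])
```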

Here is the list of labs for the Data Management portfolio in the Data Track:

Implement Data Observability with Databand
Data observability is needed to improve the quality of an organization’s data products. Databand, IBM’s latest acquisition in the Data and AI portfolio, achieves data observability through operational monitoring of data pipelines. In this lab you will learn how to monitor, identify, and resolve data pipeline issues in order to deliver reliable data to consumers.

Using Db2 Shift to move Db2 databases to a Db2U Container
Deploying an existing Db2 database to OpenShift, Kubernetes, or Cloud Pak for Data usually involves some form of export and import and a lot of work! The new Db2 Shift tool moves your on-premises database directly to the cloud, with no exporting of data! Db2 Shift can migrate your 10.5, 11.1, and 11.5 databases directly into a Db2 container with no additional effort. This hands-on lab will take you through the IBM Db2 Shift program and how it can help modernize your Db2 databases quickly and efficiently! The lab covers the following topics:
- How to install Db2U on a Kubernetes cluster
- An overview of the Db2 Shift program
- Shifting a Db2 11.5 database to Db2U on Kubernetes
- Shifting a Db2 11.5 database via the clone and deploy functions
- Shifting a Db2 11.1 columnar database to a new Db2 instance
And more!

Maximize your Governance Framework with End-to-End Data Lineage
In this session, you will get hands-on experience with Cloud Pak for Data's data lineage capabilities. We will explore how to easily configure and visualize lineage from a variety of databases, ETL tools, and BI reporting tools, including PostgreSQL, Db2, DataStage, Tableau, and Power BI.

Use AI to Improve the Performance of your Db2 for z/OS Database
Learn how IBM Db2 AI for z/OS enhances usability, improves operational performance, and maintains the health of IBM Db2 for z/OS systems.

You will see how Db2 AI's SQL query optimization, distributed connection control, system assessments, and performance insights can be leveraged to improve the performance and stability of your database and free up your DBAs for more critical tasks. IBM Db2 AI uses machine learning to learn from your unique operating environment and generate recommendations for performance improvements, warn of abnormal system behavior, and optimize SQL access paths.

Match 360 Deep Dive - Integration with DataStage NextGen
This lab will demonstrate how to integrate Match 360 and DataStage NextGen to bring in data from multiple sources using a default and generic data model. It will also cover working with a generic data model created in Match 360 and setting up matching algorithms to support the generic entity type.

Want clean Address Data Quality? Get your hands dirty!
We all need better address data quality. Whether you're new or an expert, please join us for a hands-on workshop where you can meet IBM's address data experts.
Ask them anything about address data quality, location, and geocoding. They will help you maximize the benefits you derive from the IBM address verification interface (AVI).
If you are brand new to address data quality or just need a refresher, they can give you a demonstration of the address verification solution, explain how AVI is a great addition to your data fabric plans, or even assist in tuning your AVI stage jobs.

What's New in Watson Knowledge Catalog? Get hands-on with the latest capabilities!
In this hands-on session, you will explore the newest catalog and data quality features in Watson Knowledge Catalog, including extending the catalog metadata model directly within the UI, the new built-in Data Quality dashboard, the data quality remediation workflow, and more.

The power of adaptability and reusability of DataStage Next Generation
It is important to design flexible, adaptable, and reusable DataStage flows for ETL processes. DataStage best practices include:
1. Use reusable components, such as sub-flows and routines, across multiple jobs.
2. Use environment variables to store global values across multiple jobs.
3. Use job parameters and parameter sets to pass values between jobs.
4. Use metadata to store information about the data sources and targets.
5. Use stage parameters to pass values between stages within a job.
With these best practices, you can change the ETL process without having to modify the flow design, and ensure consistency, accuracy, and reusability.

In this session, we will cover job parameters, parameter sets, and sub-flows in DataStage flow design.

Analyzing Airline Flight Delay Data with Db2 or Netezza and Jupyter Notebooks in IBM Cloud
Since the return to travel after the COVID-19 pandemic, many people have experienced flight delays. This hands-on lab will use data from the United States Department of Transportation that details flight delays for commercial airline flights. This data will be used to compare flight delays for 2023 with the prior four years and determine whether flight delays have actually increased or whether that is just a misconception. After the table is created in Db2 on Cloud and the data is loaded, a Jupyter notebook will be used to query and visualize the data to help draw conclusions from it.
Lab attendees will be provided a lab guide and the required files, and will need their own device along with an IBM Cloud account. The IBM Cloud Lite plans used in the lab are free of charge.

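The notebook pattern the lab follows might look roughly like this hedged sketch, assuming the ibm_db driver against a Db2 on Cloud instance; the connection string values and the FLIGHT_DELAYS table and columns are placeholders.

```python
# Hedged sketch: query Db2 on Cloud from a notebook and plot the
# result. Connection values and table/column names are placeholders.
import ibm_db_dbapi
import pandas as pd
import matplotlib.pyplot as plt

conn = ibm_db_dbapi.connect(
    "DATABASE=BLUDB;HOSTNAME=<host>;PORT=<port>;SECURITY=SSL;"
    "UID=<user>;PWD=<password>;",
    "", "",
)

df = pd.read_sql(
    "SELECT FLIGHT_YEAR, AVG(DEP_DELAY) AS AVG_DELAY "
    "FROM FLIGHT_DELAYS GROUP BY FLIGHT_YEAR ORDER BY FLIGHT_YEAR",
    conn,
)
df.plot(x="FLIGHT_YEAR", y="AVG_DELAY", kind="bar", legend=False)
plt.ylabel("Average departure delay (minutes)")
plt.show()
```
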
DataStage NextGen
In this hands-on session, you explore the latest features of IBM DataStage NextGen by building flows with connectors for various popular data sources. Incorporate the processing power of your existing APIs and applications into the DataStage flow. Integrate your DataStage flow into your existing IT processes. Gain insight into data quality observability via Databand integration.

Disaster Recovery solution for Netezza Performance Server using Netezza Replication Services 3.0
Netezza Replication Services 3.0 (NRS 3.0) is a complete revamp of the older NRS 1.6 that allows customers to configure more than one Netezza Performance Server system for disaster recovery or workload partitioning purposes.

This hands-on session will teach users how to install, set up, and add Netezza Performance Server databases for replication between two nodes using NRS 3.0. It also introduces the monitoring and troubleshooting tools of NRS 3.0.

The plan is to run two VM images on a laptop, each with an NPS instance running alongside the NRS software.

The target audience is Netezza server DBAs as well as CTOs.

Data Quality SLA in Watson Knowledge Catalog: The bedrock of a successful Data Monetization strategy
A data quality Service Level Agreement (SLA) specifies an organization's expectations for response and remediation of data quality issues. This is important for regulatory reporting and to support data marketplace requirements. In this session you will learn how you can use Watson Knowledge Catalog 4.7 to automatically certify the data quality results of your critical data elements. You will see how to automatically detect critical data elements, analyze data, define data quality SLAs, and automate remediation tasks with workflows.

Hands-on with the new Db2 Object Storage and Open Data Formats support and watsonx.data integration
In this hands-on lab session, take control of a Db2 Warehouse on Cloud database and explore two brand new features. Native object storage support lets you use object storage such as AWS S3 to store native Db2 table data, lowering the cost of storing data in your warehouse and increasing price-performance. Open Data Formats support allows easy integration of enterprise data in a variety of data formats as DATALAKE tables, with implementations such as Apache Parquet support within the Db2 engine. The lab will cover the SQL and Db2WoC console usage of these features, and how Db2 DATALAKE tables are integrated with watsonx.data in a seamless manner.

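As a hedged sketch of the SQL surface involved, the statement below creates an open-format table whose data lives in object storage; the storage alias, path, and table definition are illustrative guesses, not the lab's exact statements.

```python
# Hedged sketch: create a Parquet-backed DATALAKE table from Python
# over an ibm_db connection. The alias, path, and columns are
# placeholders and the exact DDL may differ by Db2 release.
import ibm_db

conn = ibm_db.connect(
    "DATABASE=BLUDB;HOSTNAME=<host>;PORT=<port>;SECURITY=SSL;"
    "UID=<user>;PWD=<password>;",
    "", "",
)

ibm_db.exec_immediate(conn, """
    CREATE DATALAKE TABLE sales_events (
        event_id   BIGINT,
        amount     DECIMAL(10, 2),
        event_date DATE
    )
    STORED AS PARQUET
    LOCATION 'DB2REMOTE://myalias//datalake/sales_events'
""")
```
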
Deep dive into Presto: Unlocking the power of distributed SQL with Presto
In this lab you'll get started with the basics of Presto, an open source, high performance SQL query engine now used by companies such as Uber, Airbnb, Meta, Alibaba, and more for interactive ad hoc queries, reporting, and dashboarding (see the connection sketch after this list). Topics include:

- What is Presto and getting started
- How to write a Presto query
- What makes Presto fast?
- Best practices for querying with Presto

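As a minimal sketch of "getting started", here is how a Python client might connect and run a query, assuming the open-source presto-python-client package and Presto's built-in tpch demo catalog; the host and user are placeholders.

```python
# Hedged sketch: run a query against a Presto coordinator using the
# presto-python-client (prestodb) package. Host/user are placeholders.
import prestodb

conn = prestodb.dbapi.connect(
    host="presto.example.com",
    port=8080,
    user="lab-user",
    catalog="tpch",   # Presto's built-in demo catalog
    schema="tiny",
)
cur = conn.cursor()
cur.execute("SELECT nationkey, name FROM nation LIMIT 5")
for row in cur.fetchall():
    print(row)
```
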
Explore the Lakehouse Developer edition with Presto, Spark, Flink & StepZen
The IBM Lakehouse Developer edition is an easy-to-get-started environment for you to play and experiment with the lakehouse, even on your laptop or a VM. You will learn what it means to use Python and Presto to bring data into the lakehouse, build applications quickly, and gain insights from your data. You can even grant access to other users to "share" your Developer instance of the lakehouse and experiment with fine-grained access controls.

As part of this session, you will also learn how to use Flink and Spark to ingest and access data from the lakehouse. You will also exercise IBM StepZen, the GraphQL-powered service, to learn how to develop analytics applications powered by lakehouse data.

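For a flavor of the Spark ingestion piece, here is a hedged sketch that writes a small DataFrame to an Iceberg table, assuming a Spark session already configured with an Iceberg catalog named "lakehouse"; all table and column names are placeholders.

```python
# Hedged sketch: ingest rows into an Iceberg table with Spark.
# Assumes an Iceberg catalog named "lakehouse" is already configured.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lakehouse-ingest").getOrCreate()

df = spark.createDataFrame(
    [(1, "click", "2023-09-11"), (2, "view", "2023-09-12")],
    ["event_id", "event_type", "event_date"],
)

# DataFrameWriterV2 creates (or replaces) the table in the catalog.
df.writeTo("lakehouse.web.events").createOrReplace()

spark.sql(
    "SELECT event_type, COUNT(*) AS n FROM lakehouse.web.events "
    "GROUP BY event_type"
).show()
```
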
Virtualize disparate data sources via a Protected & Governed View - Enabling Self-Service Analytics
In this hands-on lab session you will create a virtualized data set from multiple different database types and locations, thus avoiding the need to move or copy data. With Watson Knowledge Catalog, the lab will leverage automated data enrichment to highlight SPI and PII data types. Data protection rules will be created to specify which types of users can see which types of data. Using two different personas, you'll witness how user permissions and data protection rules allow some users access to PII data and not others, all without any explicit coding. Learn how you can enable self-service analytics with access to a 360-degree view of data that is governed, protected, cataloged, and enriched with metadata.

Deploy and Use two Db2 Warehouse on Cloud instances with Replication for Continuous Availability
Set up and use two Db2 Warehouse on Cloud instances and use them to fail over across outages, including failures, migrations, and upgrades, as well as to provide workload balancing and query offloading that protects local production system performance.
We address all database and workload requirements; set up connectivity; activate replication; enable replication for the workload; and demonstrate how the sites can be used for failover.

Bring Db2 and Netezza to the Data Lakehouse Party
This lab will highlight how to utilize native and external engines to process data stored in watsonx.data. The lab will start with the Ahana PrestoDB engine accessing data stored in the lakehouse; next, the external engines Db2 and Netezza will be used to access the same data in the lakehouse and join it with native Db2 and Netezza data.

The Wait Is Over! Unveiling The Next Gen Of Services for Db2 Administration & Development
In this hands-on lab, you will have the opportunity to experience a new way to manage your Db2 systems and to execute, tune, and stabilize your queries. You will also exploit advanced tuning features such as the index advisor and access path advisor. The capability to administer your IDAA environment is also part of the exercise. In addition, you will have the chance to experience how you can automate Db2 database changes through an intelligent web user interface or via a pipeline. All of the above is based on APIs enabled within the Unified Management Server.
Bring your own device to experience them all.

Getting Started with watsonx.data
In this hands-on lab, you'll learn the basics of using a watsonx.data lakehouse. Your lab work will include: querying lakehouse data from the CLI and from third-party tools; working with functions; using Iceberg features like time travel; and exploring object storage. This session is intended for anyone who wants to be more comfortable with lakehouse concepts and operating in an open lakehouse environment.

Here is the list of labs for the Data Lifecycle portfolio in the Data Track:

Building and scaling a resilient cloud native integration with IBM App Connect Enterprise and IBM MQ
During this lab you will deploy, scale, and upgrade a resilient cloud native integration using IBM App Connect Enterprise and IBM MQ on Red Hat OpenShift. You will start by deploying the solution using OpenShift's built-in pipeline technology, Tekton, and see how each component is resilient to failure. An application upgrade will be rolled out with zero downtime and no effect on the end-user experience. Finally, MQ and App Connect Enterprise will be scaled horizontally, with traffic automatically balanced to handle an increase in load.

GraphQL Zero to Enterprise Lab using StepZen and API Connect
Companies around the world are looking for a way to build, secure, and scale GraphQL APIs. Are you ready to support them? Join us to learn how to move from zero to enterprise in the GraphQL domain. Initially you will learn how to use StepZen to create a federated GraphQL server by pulling data from disparate sources (DB, REST, etc.). After that you will use IBM API Connect to secure and manage the lifecycle of this federated GraphQL API. Come and learn how StepZen and IBM API Connect can help your clients in their API management journey.

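Once such a federated endpoint exists, calling it is plain HTTP; here is a hedged sketch with a placeholder StepZen endpoint URL, API key, and query shape.

```python
# Hedged sketch: POST a GraphQL query to a deployed StepZen endpoint.
# The endpoint URL, API key, and schema fields are all placeholders.
import requests

ENDPOINT = "https://ACCOUNT.stepzen.net/api/demo/__graphql"
QUERY = """
query {
  customerById(id: "42") {
    name
    orders { id total }
  }
}
"""

resp = requests.post(
    ENDPOINT,
    json={"query": QUERY},
    headers={"Authorization": "apikey YOUR_STEPZEN_KEY"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["data"])
```
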
Introduction to Using API Connect and DataPower to Secure Your APIs
In this lab session, you will learn how to use user-defined policies to enhance the security of APIs hosted by IBM API Connect using DataPower. You will learn how to configure DataPower to apply custom security policies to APIs, including implementing advanced security features such as content-based access control, rate limiting, and threat protection. By the end of the lab, you will have a deep understanding of how to leverage DataPower's advanced security capabilities to protect your APIs from various threats.

Modernize your integrations with cloud-native style deployment using App Connect Enterprise on CP4I
In this lab you will learn how to evaluate the container readiness of your existing IBM Integration Bus resources running in an on-premises VM environment using ACE Transformation Advisor. Your existing integration topology may be using old-style configuration such as MQ Server bindings or configurable services, which now need to be converted to newer configuration objects using policy projects. We will walk you through the steps to migrate your integration flows to IBM Cloud Pak for Integration (CP4I), refactoring where necessary by utilizing the recommendations from ACE Transformation Advisor. You will also learn how to scale your applications in containers using ReplicaSets and autoscaling policies.

Build, share, and reuse custom connectors for your business leveraging the Connector Development Kit
Connectors play a key role in integrating applications, building APIs, and acting on events. Without connectors, users lose the versatility to quickly connect diverse types of systems, the standardization that ensures reliable consistency, and the ability to scale integrations on demand, while spending more time on maintenance and governance. In this session, we will take a deep dive into how to build your own custom connectors for easy reuse across your business.

Managing event endpoints
Event Endpoint Management lets you describe, socialise, and manage your Apache Kafka topics just like you manage APIs.

This lab walks through sharing and then consuming your first event endpoint. We will start by exploring the sharing experience: setting up an event gateway, describing existing topics, and publishing them to a searchable catalog. We will then go through the consumption experience: finding a topic, exploring its AsyncAPI definition, and generating self-service credentials to start using these events.

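On the consumption side, the generated credentials are used like any other Kafka client configuration; here is a hedged sketch with the kafka-python package, where the gateway address, topic, and credentials are placeholders and the exact security settings depend on the gateway setup.

```python
# Hedged sketch: consume events through an event gateway with
# credentials generated from the catalog. All values are placeholders.
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders",  # topic discovered in the catalog
    bootstrap_servers="event-gateway.example.com:443",
    security_protocol="SASL_SSL",
    sasl_mechanism="PLAIN",
    sasl_plain_username="GENERATED_USERNAME",
    sasl_plain_password="GENERATED_PASSWORD",
    auto_offset_reset="latest",
)
for message in consumer:
    print(message.value)
```
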
Bridging MQ and Kafka
IBM MQ and IBM Event Streams are a great complement to each other. Event Streams provides event distribution and streaming, and MQ messages are a valuable source of real-time events, representing the transactions, changes, and interactions that are occurring in the business.

Flowing MQ messages into Event Streams is a simple way to use them in additional ways: emit notifications about events in real time, enabling new and responsive applications; perform real-time analytics and analysis on events as they're emitted; or audit a historical event log of previous messages.

In this lab, you set up a fast and reliable connection between MQ and Kafka without disrupting existing MQ apps or queues, and configure this connection to transform and reformat the messages in different ways.

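One common way to build such a bridge is IBM's open-source Kafka Connect MQ source connector, registered through the Kafka Connect REST API; the sketch below is hedged, and every host, queue manager, channel, and topic value is a placeholder.

```python
# Hedged sketch: register the IBM MQ source connector (from the
# open-source kafka-connect-mq-source project) with a Kafka Connect
# worker's REST API. All connection values are placeholders.
import requests

connector = {
    "name": "mq-to-kafka",
    "config": {
        "connector.class":
            "com.ibm.eventstreams.connect.mqsource.MQSourceConnector",
        "mq.queue.manager": "QM1",
        "mq.connection.name.list": "mq.example.com(1414)",
        "mq.channel.name": "DEV.APP.SVRCONN",
        "mq.queue": "TO.KAFKA",
        "topic": "mq-events",
        "mq.record.builder":
            "com.ibm.eventstreams.connect.mqsource.builders."
            "DefaultRecordBuilder",
    },
}

resp = requests.post("http://connect.example.com:8083/connectors",
                     json=connector, timeout=30)
resp.raise_for_status()
```
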
Event Processing made easy
IBM Event Automation introduces an exciting new capability for unlocking the value of events: Event Processing.

In this lab, you will get hands-on with this new technology. We will set up a variety of Kafka topics with streams of events flowing to them, and show you how to use the low-code authoring canvas to define event processing flows.

You'll run your event processing flows and see the results directly in the low-code canvas. Then you'll export the generated SQL and try running that yourself as a production job in Apache Flink.

The lab will show you how easy it is to start processing events on your Kafka topics.

Zero to 100 - All You Need to Know - Getting Started with IBM Event Streams
Are you new to Kafka? Still trying to figure out the basics of topics, schemas, and the like? This is a good chance to polish your knowledge of all the basic components of Event Streams, the event distribution pillar of IBM Event Automation. You will learn about Kafka, replication with Geo-Replication / MirrorMaker 2, the Schema Registry, the connector framework, and integration with Instana.

Create your own web application using the Aspera Node API
It is possible to connect to Aspera and trigger transfers using the Aspera Node API. This session will teach you how to build your own web application using the Node API and trigger transfers from a browser.

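As a hedged sketch of the pattern, the request below asks the Node API for an upload transfer_spec, which a browser-side transfer SDK would then hand to the Aspera client; the host, port, credentials, and destination path are placeholders, and field names may vary by version.

```python
# Hedged sketch: request an upload transfer_spec from an Aspera Node
# API endpoint. Host, credentials, and paths are placeholders.
import requests

NODE = "https://aspera-node.example.com:9092"
body = {
    "transfer_requests": [
        {"transfer_request": {"paths": [{"destination": "/uploads"}]}}
    ]
}

resp = requests.post(
    f"{NODE}/files/upload_setup",
    json=body,
    auth=("node_user", "node_password"),
    timeout=30,
)
resp.raise_for_status()

# The returned transfer_spec is what the browser-side SDK uses to
# start the actual high-speed transfer.
transfer_spec = resp.json()["transfer_specs"][0]["transfer_spec"]
print(transfer_spec)
```
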
Horizontally scaling IBM MQ and your applications
In this session we will cover the different types of availability: message availability and service availability. In this lab you will use IBM MQ Native HA and Uniform Clusters to create a highly resilient, always-on IBM MQ solution.

IBM Cloud Pak for Integration in Action - Turbocharge your Application Integration Development
Experience firsthand an app developer's view of Cloud Pak for Integration. You will work with a single application and implement the following using the low-code/no-code capabilities of the IBM Cloud Pak for Integration components:

- Create, deploy, and test a new, external API using the IBM API Connect Developer Toolkit
- Bidirectionally sync Salesforce data with the application using IBM App Connect
- Implement near real-time transactional data replication from the application to reporting databases using IBM Event Streams (IBM's Kafka offering)

Developing and testing an IBM App Connect application
This session is designed for integration experts and business technologists who are curious about how IBM App Connect can be used by businesses building API-driven and event-driven integration architectures. IBM App Connect includes artificial intelligence (AI) and other automation features to speed time to value and reduce the risk of longer project timelines.

The hands-on lab portion of the session will have you create a simple message flow application and use the IBM App Connect Flow Exerciser to test it. The message flow uses HTTP nodes and acts as a simple web service. Finally, you use the IBM App Connect web user interface to check the status of the integration server and message flow application.

Unleash Your Data's Potential: Master Data Quality and Discovery with Cloud Pak for Data
Data has become a crucial asset for modern businesses; however, the sheer volume and complexity of data can make it difficult to manage, organize, and maintain its quality. Data quality is essential for making accurate decisions and driving business success. Poor-quality data can lead to incorrect insights, which can result in costly mistakes and lost opportunities.

In this hands-on lab, participants will learn how to leverage CP4D to improve data quality and accelerate data discovery, how to enrich metadata and tag data assets, and how to run data quality checks, identify data anomalies, and cleanse data.

By attending this lab, participants will gain the know-how to transform their organization into a data-driven powerhouse and gain a competitive edge in the market.