Pentaho Data Integration Cookbook Second Edition
Alex Meadows, María Carina Roldán
Format: PDF / Kindle (mobi) / ePub
The premier open source ETL tool is at your command with this recipe-packed cookbook. Learn to use data sources in Kettle, avoid pitfalls, and discover the advanced features of Pentaho Data Integration the easy way.
- Integrate Kettle with other components of the Pentaho Business Intelligence Suite to build and publish Mondrian schemas, create reports, and populate dashboards
- This book contains an organized sequence of recipes packed with screenshots, tables, and tips so you can complete the tasks as efficiently as possible
- Manipulate your data by exploring, transforming, validating, integrating, and performing data analysis
Pentaho Data Integration is the premier open source ETL tool, providing easy, fast, and effective ways to move and transform data. While PDI is relatively easy to pick up, it can take time to learn the best practices so you can design your transformations to process data faster and more efficiently. If you are looking for clear and practical recipes that will advance your skills in Kettle, then this is the book for you.
Pentaho Data Integration Cookbook Second Edition explains the Kettle features in detail and provides easy-to-follow recipes on file management and databases that can throw a curveball to even the most experienced developers.
Pentaho Data Integration Cookbook Second Edition provides updates to the material covered in the first edition as well as new recipes that show you how to use some of the key features of PDI that have been released since the publication of the first edition. You will learn how to work with various data sources – relational and NoSQL databases, flat files, XML files, and more. The book will also cover best practices that you can take advantage of immediately within your own solutions, such as building reusable code, ensuring data quality, and using plugins that can add even more functionality.
Pentaho Data Integration Cookbook Second Edition provides recipes that cover the common pitfalls that even seasoned developers can find themselves facing. You will also learn how to use various data sources in Kettle as well as its advanced features.
What you will learn from this book
- Configure Kettle to connect to relational and NoSQL databases and web applications like Salesforce, explore them, and perform CRUD operations
- Utilize plugins to get even more functionality into your Kettle jobs
- Embed Java code in your transformations to gain performance and flexibility
- Execute and reuse transformations and jobs in different ways
- Integrate Kettle with Pentaho Reporting, Pentaho Dashboards, Community Data Access, and the Pentaho BI Platform
- Interface Kettle with cloud-based applications
- Learn how to control and manipulate data flows
- Utilize Kettle to create datasets for analytics
Pentaho Data Integration Cookbook Second Edition is written in a cookbook format, presenting examples in the style of recipes. This allows you to go directly to your topic of interest, or follow topics throughout a chapter to gain thorough, in-depth knowledge.
Who this book is written for
Pentaho Data Integration Cookbook Second Edition is designed for developers who are familiar with the basics of Kettle but who wish to move up to the next level. It is also aimed at advanced users who want to learn how to use the new features of PDI as well as best practices for working with Kettle.
construct that stores historical data and keeps versions of the data in the same table. You can execute the SQL as it is generated, you can modify it before executing it (as you did in the recipe), or you can just ignore it. Sometimes the generated SQL includes dropping a column just because the column exists in the table but is not used in the transformation. In that case you shouldn't execute it. Read the generated statement carefully before executing it. Finally, you must know
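The warning above about generated DROP statements can be illustrated with a sketch. The table and column names below are invented for the example, not taken from the recipe:

```sql
-- Illustrative sketch of SQL that a step's SQL button might generate.
-- The first statement adds the column the transformation actually needs:
ALTER TABLE customers ADD birth_date DATE;
-- The second appears only because the column exists in the table but is
-- not used in the transformation; review it and skip it rather than
-- executing the script blindly:
ALTER TABLE customers DROP COLUMN loyalty_code;
```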
thinking of a MongoDB document is that it is akin to a multidimensional array. Like many NoSQL databases, MongoDB has a dynamic schema structure. This means that the descriptors of a dataset can be added, removed, or not even required for records to be stored into a given document.

Getting ready

We will continue to use Lahman's Baseball Database, mentioned earlier in the chapter, to load MongoDB and later use it to query for specific data. Before we can do anything else, though, we need to make
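The dynamic schema mentioned above means two documents in the same collection need not share the same fields. A minimal mongo shell sketch (it assumes a running MongoDB server; the collection and field names are invented, not taken from the Lahman database):

```javascript
// Two documents in the same hypothetical "players" collection.
// Note that the second document omits fields present in the first
// and adds one of its own; MongoDB accepts both.
db.players.insert({ name: "Babe Ruth", team: "NYY", position: "RF" });
db.players.insert({ name: "Cy Young", era: 2.63 });
```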
recognized because they appear in bold. As an example, take a look at the following sample screenshot: The databases books and sampledata are shared; the others are not. The information about shared connections is saved in a file named shared.xml located in the Kettle home directory. No matter what Kettle storage method is used (repository or files), you can share connections. If you are working with the file method, namely ktr and kjb files, the information about shared
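The shared.xml file mentioned above is plain XML. A minimal sketch of what an entry might look like follows; the exact elements vary by connection type, and the server, port, and username values here are invented for illustration:

```xml
<sharedobjects>
  <connection>
    <name>books</name>
    <server>localhost</server>
    <type>MYSQL</type>
    <database>books</database>
    <port>3306</port>
    <username>kettle_user</username>
  </connection>
</sharedobjects>
```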
the metadata generated by those predefined statements can make your transformation crash. You can also use the same variable more than once in the same statement. This is an advantage of using variables as an alternative to question marks when you need to execute parameterized SELECT statements. Named parameters are another option for storing parts of statements. They are part of the job or transformation and allow for default values and a clear definition of what each parameter means. To add or edit
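Reusing a variable more than once, as described above, might look like the following sketch. The table, column, and variable names are invented for the example; the variable must be defined before the transformation runs, and the step must be set to replace variables in the script:

```sql
-- ${REGION} is a hypothetical Kettle variable. Unlike a positional '?',
-- the same variable can appear in the statement as many times as needed:
SELECT order_id, amount
FROM   orders
WHERE  ship_region = '${REGION}'
   OR  bill_region = '${REGION}'
```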