ODI 11g in the Enterprise Part 1: Beyond Data Warehouse Table Loading
Most developers who've used Oracle Data Integrator in the past have used it to load data warehouse tables, typically from an Oracle or other relational source into a set of dimensional Oracle target tables. Some source data might come from flat files or XML files, and in some cases the source databases might be SQL Server, IBM DB2, Sybase or MySQL. Most projects of this type load data in batch, in some cases once or twice a day, or more recently in near real-time using micro-batches or push technology such as JMS queues. Compared to Oracle Warehouse Builder, ODI 11g is a fairly easy-to-understand, extensible tool with a clear product roadmap, few hidden surprises and a solid set of features for manipulating relational sources and targets.
Over the past few years though, ODI has been extended in various ways to support loading and extracting from applications such as Hyperion Planning and OLAP servers such as Oracle Essbase, through to more recent innovations such as loaders for Hadoop and Oracle R Enterprise. ODI can play a full part in service-oriented architectures, providing bulk data-movement functionality to complement SOA messaging, and through various APIs, SDKs and scripting languages it can take part in DevOps-style software development methods to support techniques such as continuous integration, build automation and agile development.
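To give a flavour of that scripting side, here's a minimal sketch of connecting to an ODI 11g repository from Groovy using the ODI SDK, the sort of thing you might run from a build server to automate repository housekeeping or deployments; the JDBC URL, schema names, passwords and work repository name below are just placeholder assumptions, so substitute your own environment's details.

```groovy
// Minimal sketch: connect to an ODI 11g master/work repository via the ODI SDK.
// All connection details and repository names here are placeholders.
import oracle.odi.core.OdiInstance
import oracle.odi.core.config.MasterRepositoryDbInfo
import oracle.odi.core.config.WorkRepositoryDbInfo
import oracle.odi.core.config.OdiInstanceConfig
import oracle.odi.core.config.PoolingAttributes
import oracle.odi.domain.project.OdiProject
import oracle.odi.domain.project.finder.IOdiProjectFinder

// Master repository connection details (placeholder values)
def masterInfo = new MasterRepositoryDbInfo("jdbc:oracle:thin:@localhost:1521:orcl",
        "oracle.jdbc.OracleDriver", "ODI_MASTER", "master_password".toCharArray(),
        new PoolingAttributes())
// Work repository name as registered in the master repository (placeholder)
def workInfo = new WorkRepositoryDbInfo("WORKREP", new PoolingAttributes())

// Create the ODI instance and authenticate as an ODI user
def odi = OdiInstance.createInstance(new OdiInstanceConfig(masterInfo, workInfo))
def auth = odi.securityManager.createAuthentication("SUPERVISOR", "supervisor_password".toCharArray())
odi.securityManager.setCurrentThreadAuthentication(auth)

// Simple smoke test: list the projects held in the work repository
def finder = (IOdiProjectFinder) odi.transactionalEntityManager.getFinder(OdiProject.class)
finder.findAll().each { println it.name }

odi.close()
```

The same pattern extends naturally to generating scenarios, applying patches or exporting objects as part of an automated build, which is the sort of thing I'll come back to in the build automation post in this series.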
In addition, as ODI has moved away from pure data warehouse ETL-style development to being a key part of a wider Fusion Middleware deployment, the code it produces becomes mission-critical and runs 24 hours a day, providing vital data movement and integration within the enterprise. New features such as JEE agents, load plans and OPMN provide resilience and fault tolerance for ODI data integration routines, but these are features introduced since the Oracle acquisition and many developers may not be aware of their existence. ODI is also part of a suite of data integration tools including products such as Oracle GoldenGate (for heterogeneous data replication and changed data capture) and Oracle Enterprise Data Quality (the ex-Datanomic toolset used for profiling and cleansing enterprise datasets), as shown in this Oracle graphic from 2012's Oracle OpenWorld:
So, over the next few days I'll be looking at where ODI is now, and how new features introduced over the 11g timeline make it a first-class development tool within the wider Fusion Middleware toolset.
I'll add the links as I publish each of the articles, but here are the ODI 11g topics I'll be covering over the next week:
- ODI 11g in the Enterprise Part 1: Beyond Data Warehouse Table Loading
- ODI 11g in the Enterprise Part 2: Data Integration using Essbase, Messaging, and Big Data Sources and Targets
- ODI 11g in the Enterprise Part 3: Data Quality and Data Profiling using Oracle EDQ
- ODI 11g in the Enterprise Part 4: Build Automation and DevOps using the ODI SDK, Groovy and ODI Tools
- ODI 11g in the Enterprise Part 5: ETL Resilience and High-Availability
I'm also planning to present a session on this topic at the upcoming BIWA Summit 2013 in San Francisco in January, so if you've got any thoughts or observations that would be worth incorporating into my final session, feel free to add them to the comments. For now though, check back tomorrow for the first installment, where we'll look at what other sources and targets ODI 11g can work with, with a particular look at how ODI 11g's new "big data" features, integrating with technologies such as Hadoop, Hive and MapReduce, work.