
Sunday, 28 July 2013

OBIEE Server Cache

What does the BI Server do?

It parses incoming logical SQL statements, stores and retrieves query results in its cache, and generates the physical SQL and MDX statements that retrieve data from the various data sources.

Oracle BI Server acts as "analytical middleware" that provides a unified, enhanced data layer over various data sources and allows users to query the data through a logical dimensional model made up of business data items and hierarchies.

Out of the box, the BI Server needs no additional configuration. In OBIEE 11g, most configuration is done in Fusion Middleware Control (Enterprise Manager).

Oracle's long-term strategy is to expose all configuration settings via one of the following:

  1. Fusion Middleware Control's web console

  2. Configuration JMX MBeans

  3. WebLogic Scripting Tool (WLST)


Even though most of the configuration settings are managed through the options above, in the current release of OBIEE (11.1.1.6.0) some settings still need to be set in configuration files within the OBI file system.

For the Oracle BI Server, the file that holds these settings is NQSConfig.INI.

The NQSConfig.INI file contains some significant configuration settings:

  1. Fine-tuning settings for caching.

  2. Sort order and sort locale for the BI Server.

  3. Time and date display formats.

  4. Whether Globally Unique Identifiers (GUIDs) are updated when a user logs into the dashboard.

  5. Usage tracking.


Note: From OBIEE 11.1.1.6.0, usage tracking is controlled through a JMX MBean attribute and should not be altered manually in the configuration file.

Location of NQSConfig.INI file

[middleware_home]\instances\instance1\config\OracleBIServerComponent\coreapplication_obis1\NQSConfig.INI

Additional BI Server Functions and Management Tasks:

  1. Configuring and managing the query cache

  2. Setting up usage tracking

  3. Creating aggregates through the Aggregate Persistence Wizard

  4. Enabling write back.


Overview of BI Server Query Cache:

The BI Server query cache can improve the performance of analyses that request commonly used data. When query caching is enabled, the results generated by an analysis are stored in the cache and become available to other analyses that request the same data, or whose results can be partially satisfied or derived from the data already held in the cache.

Cache settings control the following:

  1. How big the cache can get.

  2. How many entries it can store.

  3. How hard the BI Server should search the cache for results that could satisfy a query.

  4. How to detect when data in the cache is stale compared with the source system data.


How Does Query Caching Work?

The Oracle BI Server query cache has three main elements in its architecture:

  1. Cache Storage space

  2. Cache metadata

  3. Cache detection in query compilation


Answering a query from the cache is faster than querying the underlying data source, because the cache is stored locally and already contains results that are precomputed, prejoined and prefiltered.

The contents of the query cache are stored in a set of files on the Oracle BI Server file system.

Note:

  1. The BI Server's cache files can be moved to solid-state disk (SSD), if one is available, giving potentially faster access to these files and even faster queries.

  2. When scaling out or scaling up an Oracle BI installation, so that there is more than one BI Server component in the Oracle BI instance, you can also enable a GLOBAL CACHE that can be accessed by all BI Servers in the cluster.




The Oracle BI Server query cache follows a series of rules to maximize the chance of achieving a CACHE HIT. The cache services perform the following actions when processing a query sent to the Oracle BI Server:

  1. When the Oracle BI Server receives a logical SQL query from an end-user analysis via an ODBC client, it parses the SQL statement and, if query caching is enabled, checks the CACHE METADATA to determine whether the required data can be satisfied by, or derived from, data held in the query cache.

  2. If found, the required data is retrieved from the query cache and passed to the final processing stage, where activities such as sorting and ordering are performed before the result is returned to the end user or calling application.

  3. If the data is not found in the cache metadata, a query plan is generated, queries are issued to the data warehouse or other data sources, and the returned data is processed into the final data set. The BI Server then checks again whether query caching is enabled; if so, the results are stored in the cache.


What factors or rules does the cache follow to decide on a cache hit?

  1. All the columns in the SELECT list must either be present in the cached query or be derivable from it (computable from the cached query's columns).

  2. The WHERE clause must match that of the cached query, or be a logical subset of it (see the example after this list).

  3. Time series functions and external aggregation functions must match exactly, both in the columns selected and in the filters used.

  4. The query must use the same (full) set of logical tables that the cached query used. This avoids the case where a missing or extra join condition would alter the number of rows returned.

  5. If session variables are used, their values must match those of the cached query.

  6. The incoming query must have join conditions equivalent to those of the cached query.

  7. The query must have compatible aggregation levels: either the same as the cached query, or the cached query must be aggregated at a level that can be rolled up to provide the correct result for the incoming query.

  8. The ORDER BY columns of the incoming query must use columns from the SELECT list of the cached query.
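To make rule 2 concrete, here is a hedged illustration (the subject area and column names are invented for the example, not taken from any shipped repository): if the first analysis below is already cached, the second can normally be answered from that cache entry, because its WHERE clause only restricts a column that is already in the cached SELECT list.

-- cached analysis (logical SQL)
SELECT "Time"."Year", "Products"."Brand", "Sales Facts"."Revenue" FROM "Sales"

-- incoming analysis: same columns, WHERE clause is a subset of the cached result,
-- so the BI Server can satisfy it from the cache entry
SELECT "Time"."Year", "Products"."Brand", "Sales Facts"."Revenue" FROM "Sales"
WHERE "Time"."Year" = 2012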


When will the BI Server decide it should not check the cache?

  1. If the query uses functions such as CURRENT_TIMESTAMP, CURRENT_TIME or POPULATE.

  2. If the query uses a table marked as non-cacheable in the physical layer of the RPD.

  3. If the query result set would be too big to store in the cache, as defined by DATA_STORAGE_PATHS and MAX_ROWS_PER_CACHE_ENTRY.

  4. If the query is cancelled by the user or by a time-out.


WHEN SHOULD YOU USE THE QUERY CACHE?

  1. When the data warehouse is loaded infrequently. The data load process can include a step that purges the query cache, so that all subsequent cache hits use fresh data.

  2. Where users typically request the same or similar data, which maximizes cache hits.


WHAT OTHER CACHES ARE THERE IN ORACLE BI?

  1. The Oracle BI Presentation Services cache, which stores query results and reuses them, for example to avoid delays and round trips to the BI Server when an end user switches between screens on a dashboard.

  2. The web browser cache, which stores images and files locally to avoid round trips to the web server used by OBI.


Note: The Presentation Services cache can be configured through settings in the instanceconfig.xml file.

How is this done?

By changing the Presentation Services cache settings: add an entry within the <ServerInstance></ServerInstance> tags of the file (a sketch follows the list below). The parameters are listed below.

  1. MaxEntries

  2. MaxExpireMinutes

  3. MinExpireMinutes

  4. MinUserExpireMinutes


Once added, restart all Presentation Services components to register the new settings.
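As a rough sketch only (the element nesting and values below are illustrative and should be checked against the Presentation Services documentation for your release), the entry added inside <ServerInstance> looks something like this:

<ServerInstance>
  ...
  <Cache>
    <Query>
      <MaxEntries>100</MaxEntries>
      <MaxExpireMinutes>60</MaxExpireMinutes>
      <MinExpireMinutes>10</MinExpireMinutes>
      <MinUserExpireMinutes>10</MinUserExpireMinutes>
    </Query>
  </Cache>
  ...
</ServerInstance>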

Benefits & drawbacks of using QUERY CACHING

Benefits:

  1. Improved performance of analyses and dashboards: the data is read from locally stored files rather than hitting the databases to retrieve the results.

  2. Reduced network traffic, because the data is accessed locally.

  3. Reduced work for the source databases, by avoiding the need to retrieve the same data again and again.

  4. Reduced work for the BI Server, which otherwise has to recombine the data set from the retrieved data.


Drawbacks:

  1. The cache has to be purged every time a new ETL run completes; otherwise it holds stale data.

  2. Retrieving data from the cache costs additional CPU and I/O load on the server.

  3. If the cache takes up a lot of space in the file system, additional storage (such as SSD or RAM) is needed.

  4. Queries to seed the cache add an extra task to the ETL load on the source database, and may require extra time after the ETL process to perform the seeding.


MANAGING THE QUERY CACHE:

  • Enabling, disabling and tuning the running query cache.

  • Purging the cache to remove stale data.

  • Seeding the cache to "preload" it with frequently accessed data (a sketch follows below).
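Seeding can use the same nqcmd technique shown later for purging: put the logical SQL of your most frequently used analyses into a .sql file and run it against the BI Server ODBC DSN after the nightly purge. This is a sketch only; the DSN, user and file names here are placeholders:

nqcmd -d "AnalyticsWeb" -u weblogic -p [password] -s [path to seed queries].sql -o seed_cache.log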


Enabling and disabling the cache:

  1. Open Fusion Middleware Control.

  2. Select "Business Intelligence" on the left-hand side, then the Capacity Management tab, then the Performance tab.

  3. Enable the cache.

  4. Activate the changes and restart the BI components.


Setting Cache Parameters in NQSConfig.INI

  1. DATA_STORAGE_PATHS: Defines the directories that will be used to store the cache files and the maximum size they can reach.


  2. MAX_ROWS_PER_CACHE_ENTRY: Determines, per query, the maximum number of rows a result set can have and still be allowed into the cache.

    Default value: 100000.


  3. POPULATE_AGGREGATE_ROLLUP_HITS: Defaults to NO.

    Queries that are satisfied from the cache and then aggregate that data are not themselves stored in the cache. If this parameter is set to YES, queries that roll up data from the cache are stored in the cache after they complete.

  4. USE_ADVANCED_HIT_DETECTION: Defaults to NO. Use this when you know results are in the cache but the retrieval process does not use them. Setting it to YES tells the BI Server to search the cache more exhaustively, at the expense of extra CPU overhead.

  5. MAX_SUBEXPR_SEARCH_DEPTH: Controls how many levels deep into a query's expression tree the hit detector searches for matching subexpressions.

  6. DISABLE_SUBREQUEST_CACHING: When set to YES, prevents individual subrequests of a query from being cached.


After editing these parameters, restart the BI Server components for the changes to take effect.
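For orientation, the [CACHE] section of NQSConfig.INI ends up looking broadly like the sketch below. The path and sizes are example values only, and in 11g the ENABLE flag is normally managed centrally from Fusion Middleware Control rather than edited by hand:

[CACHE]
# In 11g, ENABLE is normally controlled from Fusion Middleware Control
ENABLE = YES;
DATA_STORAGE_PATHS = "C:\OBIEE\cache" 500 MB;
MAX_ROWS_PER_CACHE_ENTRY = 100000;
POPULATE_AGGREGATE_ROLLUP_HITS = NO;
USE_ADVANCED_HIT_DETECTION = NO;
MAX_SUBEXPR_SEARCH_DEPTH = 7;
DISABLE_SUBREQUEST_CACHING = NO;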

PURGING QUERY CACHE:

  • Configuring Cache Persistence Times for Individual Physical objects.

  • Purging the cache Using Oracle BI Administrator Tool.

  • Programmatically run the ODBC command to purge cache entries at the end of DW load process.

  • Purging the cache Using Event Polling Tables.


Configuring Cache Persistence Times for Individual Physical objects:

  1. You can set a cache persistence time on each individual physical object; the cache entries for that object are then purged according to that timetable.

  2. This type of cache management suits more frequently updated source systems.



Purging the cache Using Oracle BI Administrator Tool.

The cache can be managed by opening the Administration Tool and connecting online to the OBIEE server.  Go to Manage -> Cache:



This contains some very interesting information about each entry such as when it was created, who created it, what logical SQL it contains, how many rows are in the entry, how many times the entry has been used since it was created etc.

To purge cache entries, select them in the right-hand window, right-click and choose Purge from the context menu:


Programmatically purging the cache


Oracle provides a number of ODBC calls which you can issue to the OBIEE server, over the Oracle-supplied ODBC driver, to purge the cache. These are:

SAPurgeCacheByQuery(x) - This will purge the cache of an entry that meets the logical SQL passed to the method.

SAPurgeCacheByTable(x,y,z,m) - This will purge the cache for all entries for the specified physical table where:

x = Database name
y = Catalog name
z = Schema name
m = Table name

SAPurgeAllCache() - This will purge all entries from the query cache.

These ODBC calls can be issued by creating a .sql file containing the call, for example:

Call SAPurgeAllCache();

Then create a script to call it, on Windows you can create a .cmd file with the contents:

nqcmd -d "AnalyticsWeb" -u Administrator -p [admin password] -s [path to sql file].sql -o [output log file name]

This uses nqcmd to open an ODBC connection to the AnalyticsWeb data source and then run your .sql file, which purges the cache.
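The other calls follow the same pattern. For example, to purge only the entries that reference the PRODUCTS physical table used in the event polling example further down (the database, schema and table names are illustrative of that setup, not required values), the .sql file would contain:

Call SAPurgeCacheByTable( 'orcl SH', NULL, 'SH', 'PRODUCTS' );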

I have used this many times as the only step in a custom Informatica workflow, which is then added as a task in the DAC and set as a following task for an Execution Plan. This way the cache is purged automatically after the ETL and there is no risk of stale cache entries.

Purging the cache Using Event Polling Tables


Another method of purging the cache programmatically is to use event polling tables.

Event polling is a way to notify the BI Server that one or more physical tables have been updated. The event table is a physical table that resides in a relational database accessible to the Oracle BI Server.

The event table is updated by SQL INSERT statements issued by the data load scripts or programs that populate the databases. The INSERT statements add one row each time a physical table is modified.

To create an event polling table, run the Repository Creation Utility (RCU) to create the Business Intelligence Platform (BIPLATFORM) schemas in your physical database. RCU creates an event polling table called S_NQ_EPT.

create table S_NQ_EPT (
  Update_Type     NUMBER          not null,
  Update_TS       DATE            default SYSDATE not null,
  Database_Name   VARCHAR2(120)   null,
  Catalog_Name    VARCHAR2(120)   null,
  Schema_Name     VARCHAR2(120)   null,
  Table_Name      VARCHAR2(120)   not null,
  Other_Reserved  VARCHAR2(120)   null
);
-- Column lengths above are indicative; use the event polling DDL shipped with
-- OBIEE (in the schema directory) for the exact definition created by RCU.

The Oracle BI Server must have read and write permission on the event polling table. The server reads the event table at specified intervals to look for changed data. Applications add rows to the event table when database tables are modified (for example, during a load operation). When there are rows in the event table, there is changed data in the underlying databases. The server then invalidates any cache entries that correspond to the changed physical tables and periodically deletes obsolete rows from the event table. The next time it checks the event table, the process repeats.

Event tables do not need to be exposed in the Business Model and Mapping layer.

  1. Open the Admin Tool and open the repository online or offline.

  2. Select File - Import - Metadata and enter the connection details for the database that contains the cache polling table.

  3. Import the S_NQ_EPT table from the BIPLATFORM schema.

  4. To register the table as the repository's cache polling table, select Tools - Utilities - Oracle BI Event Tables and click Execute.

  5. Define the polling frequency. [Stale cache entries are purged automatically at the specified polling interval.]



Test





  • Create a report with a product attribute to seed the cache.


  • Insert a row to instruct OBIEE to delete all cache entries whose SQL references the product table:


INSERT INTO S_NQ_EPT
  (update_type, update_ts, database_name, catalog_name, schema_name, table_name, other_reserved)
VALUES
  (1, sysdate, 'orcl SH', NULL, 'SH', 'PRODUCTS', NULL);





  • Wait for the polling interval and verify that the cache entry is deleted and that you can find a trace like the one below in the NQQuery.log file.


+++Administrator:fffe0000:fffe001c:----2010/06/16 00:45:33
-------------------- Sending query to database named ORCL (id: <<13628>>):
select T4660.UPDATE_TYPE as c1,
T4660.UPDATE_TS as c2,
T4660.DATABASE_NAME as c3,
T4660.CATALOG_NAME as c4,
T4660.SCHEMA_NAME as c5,
T4660.TABLE_NAME as c6
from
S_NQ_EPT T4660
where ( T4660.OTHER_RESERVED in ('') or T4660.OTHER_RESERVED is null )
minus
select T4660.UPDATE_TYPE as c1,
T4660.UPDATE_TS as c2,
T4660.DATABASE_NAME as c3,
T4660.CATALOG_NAME as c4,
T4660.SCHEMA_NAME as c5,
T4660.TABLE_NAME as c6
from
S_NQ_EPT T4660
where ( T4660.OTHER_RESERVED = 'oracle10g' )

+++Administrator:fffe0000:fffe001c:----2010/06/16 00:45:33
-------------------- Sending query to database named ORCL (id: <<13671>>):
insert into
S_NQ_EPT("UPDATE_TYPE", "UPDATE_TS", "DATABASE_NAME", "CATALOG_NAME", "SCHEMA_NAME", "TABLE_NAME", "OTHER_RESERVED")
values (1, TIMESTAMP '2010-06-16 00:44:53', 'orcl SH', '', 'SH', 'PRODUCTS', 'oracle10g')

+++Administrator:fffe0000:fffe0003:----2010/06/16 00:45:33
-------------------- Cache Purge of query:
SET VARIABLE QUERY_SRC_CD='Report',SAW_SRC_PATH='/users/administrator/cache/answers_with_product';
SELECT Calendar."Calendar Year" saw_0, Products."Prod Category" saw_1, "Sales Facts"."Amount Sold"
saw_2 FROM SH ORDER BY saw_0, saw_1

+++Administrator:fffe0000:fffe001c:----2010/06/16 00:45:33
-------------------- Sending query to database named ORCL (id: <<13672>>):
select T4660.UPDATE_TIME as c1
from
S_NQ_EPT T4660
where ( T4660.OTHER_RESERVED = 'oracle10g' )
group by T4660.UPDATE_TS
having count(T4660.UPDATE_TS) = 1

+++Administrator:fffe0000:fffe001c:----2010/06/16 00:45:33
-------------------- Sending query to database named ORCL (id: <<13716>>):
delete from
S_NQ_EPT where S_NQ_EPT.UPDATE_TS = TIMESTAMP '2010-06-16 00:44:53'

Currency Conversion in Oracle BI applications

A typical data warehouse contains Star and/or Snowflake schema, made up of Dimensions and Facts. The facts store various numerical information including amounts. Example; Order Amount, Invoice Amount etc.

With the true global nature of business now-a-days, the end-users want to view the reports in their own currency or in global/common currency as defined by their business.

This presents a unique opportunity in BI to provide the amounts in converted rates either by pre-storing or by doing on-the-fly conversions while displaying the reports to the users.

Source Systems

OBIA caters to various source systems like EBS, PSFT, Sebl, JDE, Fusion etc. Each source has its own unique and intricate ways of defining and storing currency data, doing currency conversions and presenting to the OLTP users.

For example, EBS stores conversion rates between currencies classified by rate type, such as Corporate rate, Spot rate and Period rate, while Siebel stores exchange rates by rate types such as Daily. EBS and Fusion store the conversion rates for each day, whereas PSFT and Siebel store them for a range of days. PSFT has a Rate Multiplication Factor and a Rate Division Factor from which the rate must be calculated, whereas the other source systems store the currency exchange rate directly.

OBIA Design

Consolidating data from various disparate source systems poses the challenge of conforming the various currencies, rate types, exchange rates and so on, and of designing the best way to present the amounts to users without affecting performance.

When consolidating the data for reporting in OBIA, we have designed the mechanisms in the Common Dimension, to allow users to report based on their required currencies.

OBIA Facts store amounts in various currencies:

Document Currency: This is the currency of the actual transaction. For a multinational company, this can be in various currencies.

Local Currency: This is the base currency in which the accounting entries are recorded by the business. This is generally defined in the Ledger of the company.

Global Currencies: OBIA provides five Global Currencies. Three are used across all modules. The last two are for CRM only. A Global currency is very useful when creating reports where the data is viewed enterprise-wide. Example; a US based multinational would want to see the reports in USD. The company will choose USD as one of the global currencies. OBIA allows users to define up-to five global currencies during the initial implementation.

The term Currency Preference is used to designate the set of values: Document Currency, Local Currency, Global Currency 1, Global Currency 2, Global Currency 3; which are shared among all modules. There are four more currency preferences, specific to certain modules: Global Currency 4 (aka CRM Currency) and Global Currency 5 which are used in CRM; and Project Currency and Contract Currency, used in Project Analytics.

When choosing Local Currency for Currency preference, the data will show in the currency of the Ledger (or Business Unit) in the prompt. So it is important to select one Ledger or Business Unit when viewing data in Local Currency. More on this can be found in the section: Toggling Currency Preferences in the Dashboard.

Design Logic

When extracting the fact data, the OOTB mappings extract and load the document amount, and the local amount in target tables. It also loads the exchange rates required to convert the document amount into the corresponding global amounts.

If the source system only provides the document amount in the transaction, the extract mapping does a lookup to get the Local currency code, and the Local exchange rate. The Load mapping then uses the local currency code and rate to derive the local amount. The load mapping also fetches the Global Currencies and looks up the corresponding exchange rates.

The lookup of exchange rates is done via the Exchange Rate Dimension provided as a Common/Conforming Dimension in OBIA.

The Exchange Rate Dimension stores the exchange rates between various currencies for a date range and Rate Type. Two physical tables W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G are used to provide the lookups and conversions between currencies. The data is loaded from the source system's Ledger tables. W_EXCH_RATE_G stores the exchange rates between currencies with a date range. On the other hand, W_GLOBAL_EXCH_RATE_G stores the currency conversions between the document currency and the pre-defined five Global Currencies for each day. Based on the requirements, the fact mappings can decide and use one or both tables to do the conversion.
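Conceptually, the global-amount derivation during the load amounts to a lookup join of the following shape. The fact table and column names below are purely illustrative, not the actual OBIA mapping:

SELECT f.doc_amt,
       f.doc_amt * g.global1_exchange_rate AS global1_amt,
       f.doc_amt * g.global2_exchange_rate AS global2_amt
FROM   w_sales_invoice_f    f
JOIN   w_global_exch_rate_g g
  ON   g.from_currency_code = f.doc_curr_code
 AND   g.exch_dt            = f.exch_dt;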

Currency design in OBIA also taps into the MLS and Domain architecture, thus allowing the users to map the currencies to a universal Domain during the implementation time. This is especially important for companies deploying and using OBIA with multiple source adapters.

Some Gotchas to Look for

It is necessary to think through the currencies during the initial implementation.

1) Identify the various types of currencies used by your business. Understand what your Local (or Base) and Document currency will be. Identify the various global currencies in which your users will want to see reports; this depends on the global nature of your business. Changes to these currencies later in the project, while permitted, may cause full data loads and hence lost time.

2) If you have a multi-source system, make sure that the Global Currencies and Global Rate Types chosen in Configuration Manager have the corresponding source-specific counterparts. In other words, make sure that for every DW-specific value chosen for Currency Code or Rate Type, there is already a source Domain mapping.

Technical Section

This section will briefly mention the technical scenarios employed in the OBIA adaptors to extract data from each source system.

In OBIA, we have two main tables which store the Currency Rate information as explained in previous sections. W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G are the two tables.

W_EXCH_RATE_G stores all the Currency Conversions present in the source system. It captures data for a Date Range. W_GLOBAL_EXCH_RATE_G has Global Currency Conversions stored at a Daily level. However the challenge here is to store all the 5 Global Currency Exchange Rates in a single record for each From Currency. Let's voyage further into the Source System Extraction logic for each of these tables and understand the flow briefly.

EBS: In EBS, we have Currency Data stored in GL_DAILY_RATES table. As the name indicates GL_DAILY_RATES EBS table has data at a daily level. However in our warehouse we store the data with a Date Range and insert a new range record only when the Exchange Rate changes for a particular From Currency, To Currency and Rate Type. Below are the main logical steps that we employ in this process.

  1. (Incremental Flow only) – Cleanup the data in W_EXCH_RATE_G.
    1. Delete the records which have Start Date > minimum conversion date
    2. Update the End Date of the existing records.
  2. Compress the daily data from GL_DAILY_RATES table into Range Records. Incremental map uses $$XRATE_UPD_NUM_DAY as an extra parameter.
    1. Generate Previous Rate, Previous Date and Next Date for each of the Daily record from the OLTP.
    2. Filter out the records which have Conversion Rate same as Previous Rates or if the Conversion Date lies within a single day range.
  3. Mark the records as 'Keep' and 'Filter' and also get the final End Date for the single Range record (Unique Combination of From Date, To Date, Rate and Conversion Date).
  4. Filter the records marked as 'Filter' in the INFA map.
  5. The above steps will load W_EXCH_RATE_GS. Step 0 updates/deletes W_EXCH_RATE_G directly.
  6. SIL map will then insert/update the GS data into W_EXCH_RATE_G.

These steps convert the daily records in GL_DAILY_RATES to Range records in W_EXCH_RATE_G.

We do not need such special logic for loading W_GLOBAL_EXCH_RATE_G. This is a table where we store data at a Daily Granular Level. However we need to pivot the data because the data present in multiple rows in source tables needs to be stored in different columns of the same row in DW. We use GROUP BY and CASE logic to achieve this.
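As a sketch of that pivot (the staging table and column names are made up for the illustration; the real logic lives in the OOTB mappings), the CASE and GROUP BY pattern looks like this:

SELECT from_currency,
       conversion_date,
       rate_type,
       MAX(CASE WHEN to_currency = 'USD' THEN exchange_rate END) AS global1_exchange_rate,
       MAX(CASE WHEN to_currency = 'EUR' THEN exchange_rate END) AS global2_exchange_rate,
       MAX(CASE WHEN to_currency = 'GBP' THEN exchange_rate END) AS global3_exchange_rate
       -- ... repeated for Global Currencies 4 and 5
FROM   daily_rates_stage
GROUP BY from_currency, conversion_date, rate_type;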

Fusion: Fusion has extraction logic very similar to EBS. The only difference is that the Cleanup logic that was mentioned in step 0 above does not use $$XRATE_UPD_NUM_DAY parameter. In Fusion we bring all the Exchange Rates in Incremental as well and do the cleanup. The SIL then takes care of Insert/Updates accordingly.

PeopleSoft: PeopleSoft does not have From Date and To Date explicitly in the source tables. Let's look at an example. Please note that this is achieved from PS1 onwards only.

1 Jan 2010 – USD to INR – 45

31 Jan 2010 – USD to INR – 46

PSFT stores records in above fashion. This means that Exchange Rate of 45 for USD to INR is applicable for 1 Jan 2010 to 30 Jan 2010. We need to store data in this fashion in DW.

Also PSFT has Exchange Rate stored as RATE_MULT and RATE_DIV. We need to do a RATE_MULT/RATE_DIV to get the correct Exchange Rate.

We generate From Date and To Date while extracting data from source and this has certain assumptions:

If a record gets updated/inserted in the source, it will be extracted in incremental. Also if this updated/inserted record is between other dates, then we also extract the preceding and succeeding records (based on dates) of this record. This is required because we need to generate a range record and we have 3 records whose ranges have changed. Taking the same example as above, if there is a new record which gets inserted on 15 Jan 2010; the new ranges are 1 Jan to 14 Jan, 15 Jan to 30 Jan and 31 Jan to Next available date. Even though 1 Jan record and 31 Jan have not changed, we will still extract them because the range is affected.
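A rough sketch of that derivation (the PSFT table and column names here are approximations of the market rate table, so treat them as assumptions): use an analytic LEAD over the effective date to build the range, and divide RATE_MULT by RATE_DIV to get the rate.

SELECT from_cur,
       to_cur,
       cur_rt_type,
       effdt                                    AS from_date,
       -- day before the next effective-dated row; NULL means the range is still open
       LEAD(effdt) OVER (PARTITION BY from_cur, to_cur, cur_rt_type
                         ORDER BY effdt) - 1    AS to_date,
       rate_mult / rate_div                     AS exchange_rate
FROM   ps_rt_rate_tbl;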

Similar logic is used for Global Exchange Rate Extraction. We create the Range records and get it into a Temporary table. Then we join to Day Dimension, create individual records and pivot the data to get the 5 Global Exchange Rates for each From Currency, Date and Rate Type.

Siebel: Siebel Facts are dependent on Global Exchange Rates heavily and almost none of them really use individual Exchange Rates. In other words, W_GLOBAL_EXCH_RATE_G is the main table used in Siebel from PS1 release onwards.

As of January 2002, the Euro Triangulation method for converting between currencies belonging to EMU members is not needed for present and future currency exchanges. However, the method is still available in Siebel applications, as are the old currencies, so that historical data can be maintained accurately. The following description applies only to historical data needing conversion prior to the 2002 switch to the Euro for the EMU member countries. If a country is a member of the European Monetary Union (EMU), you should convert its currency to other currencies through the Euro. This is called triangulation, and it is used whenever either currency being converted has EMU Triangulation checked.

Due to this, there are multiple extraction flows in SEBL, i.e. EUR to EMU, EUR to NonEMU, EUR to DMC and so on. We load W_EXCH_RATE_G with these data through multiple flows. This has been kept the same as in previous versions of OBIA.

W_GLOBAL_EXCH_RATE_G being a new table does not have such needs. However SEBL does not have From Date and To Date columns in the Source tables similar to PSFT. We use similar extraction logic as explained in PSFT section for SEBL as well.

What if all 5 Global Currencies configured are same?

As mentioned in previous sections, from PS1 onwards we store Global Exchange Rates in W_GLOBAL_EXCH_RATE_G table. The extraction logic for this table involves Pivoting data from multiple rows into a single row with 5 Global Exchange Rates in 5 columns. As mentioned in previous sections, we use CASE and GROUP BY functions to achieve this. This approach poses a unique problem when all the 5 Global Currencies Chosen are same. For example – If the user configures all 5 Global Currencies as 'USD' then the extract logic will not be able to generate a record for From Currency=USD. This is because, not all Source Systems will have a USD->USD conversion record.

We have _Generated mappings to take care of this case. We generate a record with Conversion Rate=1 for such cases.

Reusable Lookups

Before PS1, we had a Mapplet for Currency Conversions. In PS1, we only have reusable Lookups- LKP_W_EXCH_RATE_G and LKP_W_GLOBAL_EXCH_RATE_G. These lookups have another layer of logic so that all the lookup conditions are met when they are used in various Fact Mappings. Any user who would want to do a LKP on W_EXCH_RATE_G or W_GLOBAL_EXCH_RATE_G should and must use these Lookups. A direct join or Lookup on the tables might lead to wrong data being returned.

Changing Currency preferences in the Dashboard:

In the 796x series, all amount metrics in OBIA showed the Global1 amount, and customers needed to change the metric definitions to show them in another currency preference. Project Analytics has supported currency preferences since the 7.9.6 release, and a Tech Note was published so that customers of other modules could add toggling between currency preferences to the solution.

List of Currency Preferences

Starting from the 11.1.1.x release, the BI Platform added a new feature to support multiple currencies. The new session variable (PREFERRED_CURRENCY) is populated through a newly introduced currency prompt. This prompt can take its values from the xml file userpref_currencies_OBIA.xml, which is hosted in the BI Server installation folder, under: <home>\instances\instance1\config\OracleBIPresentationServicesComponent\coreapplication_obips1\userpref_currencies.xml

This file contains the list of currency preferences, like "Local Currency", "Global Currency 1", and so on, which customers can also rename to give them more meaningful business names. There are two options for showing the list of currency preferences to the user in the dashboard: Static and Dynamic. In Static mode, all users see the full list as in the user preference currencies file. In Dynamic mode, the list shown in the currency prompt drop-down is the result of a dynamic query specified in the same file. Customers can build some security into the RPD, so that the list of currency preferences is based on the user's roles. BI Applications built a subject area, "Dynamic Currency Preference", to run this query and give every user only the list of currency preferences required by his application roles.

Adding Currency to an Amount Field

When the user selects one of the items from the currency prompt, all the amounts on that page are shown in the currency corresponding to that preference. For example, if the user selects "Global Currency 1" from the prompt, all data is shown in Global Currency 1 as specified in the Configuration Manager. If the user selects "Local Currency", all amount fields are shown in the currency of the Business Unit selected in the BU filter of the same page. If no particular Business Unit is selected in that filter, and the data selected by the query contains amounts in more than one currency (for example, one BU has USD as its functional currency and another has EUR), then subtotals will not be available (USD and EUR amounts cannot be added in one field) and, depending on the setup (see the next paragraph), the user may receive an error.

There are two ways to add the Currency field to an amount metric:

  1. In the form of a currency code, like USD or EUR. For this the user needs to add the field "Apps Common Currency Code" to the report. This field is in every subject area, usually under the table "Currency Tag" or "Currency Code".
  2. In the form of a currency symbol ($ for USD, € for EUR, and so on). For this, the user needs to format the amount metrics in the report as a currency column, by specifying the currency tag column in the Column Properties option of the Column Actions drop-down list. Typically this column should be the "BI Common Currency Code" available in every subject area.
    1. Select the Column Properties option in the Edit list of a metric.
    2. In the Data Format tab, select Custom as Treat Number As.
    3. Enter the following syntax under Custom Number Format: [$:currencyTagColumn=Subjectarea.table.column], where Column is the "BI Common Currency Code" defined to take the currency code value based on the currency preference chosen by the user in the currency preference prompt.

Aggregate Persistence Wizard

In this topic, you learn about the role and function of the Aggregate Persistence Wizard in creating and modeling aggregates to resolve query bottlenecks.


Aggregate tables store precomputed results, which are measures that have been aggregated (typically summed) over a set of dimensional attributes. Using aggregate tables is a very popular technique for speeding up query response times in decision support systems. This eliminates the need for run-time calculations and delivers faster results to users. The calculations are done ahead of time and the results are stored in tables. Aggregate tables should have fewer rows than the nonaggregate tables and, therefore, processing should be quicker.


The aggregate navigation capability of Oracle BI Server allows queries to use the information stored in aggregate tables automatically, without query authors or tools having to specify aggregate tables in the queries. Oracle BI Server allows users to concentrate on asking the right business questions, because the server decides which tables provide the fastest answers. For Oracle BI Server to have enough information to navigate to aggregate tables, certain metadata in the repository must be correctly configured.


The traditional process of creating aggregates for Oracle BI Server queries is manual. It can be tedious, requiring complicated data definition language (DDL) and data manipulation language (DML) scripts to be written for creating tables in the databases involved. Additionally, these aggregated tables need to be mapped to the repository metadata to be available for queries. This is a time-consuming and, possibly, error-prone process.


The Aggregate Persistence Wizard enables you to automate the creation of physical aggregate tables and their corresponding objects in the repository. The Aggregate Persistence Wizard creates an Oracle BI Enterprise Edition SQL script, which is executed by the BI Server. The script specifies each aggregate table to be created, the facts from the business model that should be included in it, and its dimensions and grain. When the BI Server runs the aggregate persistence SQL script, it generates DDL to create the required tables in the target database, internal instructions to generate the corresponding physical and aggregate navigation metadata, and data manipulation language (DML) to aggregate and load data from the base tables into the aggregate tables. The aggregate persistence script is intended to be run after each extraction, transformation, and loading (ETL) of the base tables, typically nightly. This can be done by an Oracle BI EE Job Manager job, or it can be run as a .bat or any other script called by a custom program.


The default prefix SA_ is automatically added to dimension (level) aggregates. You can change this default prefix by updating the AGGREGATE_PREFIX parameter in the AGGREGATE_PERSISTENCE section of the NQSConfig.INI file: AGGREGATE_PREFIX = "prefix_name" ;




Using the Aggregate Persistence Wizard


Select Tools>Utilities from the menu bar.




From the Utilities window, select Aggregate Persistence and click Execute. The Aggregate Persistence Wizard opens.




In the Aggregate Persistence - Select File Location window enter the name of the SQL script file as aggregate.sql. Enter the folder where this file will be placed. Click Next.




In the Aggregate Persistence - Select Business Measures window, select the subject area 01 - Sample App Exa and the following measures from the F0 Sales Base Measures table:


1-Revenue


2-Billed Quantity


3-Discount Amount


4-Paid Amount


Click Next.




In the Aggregate Persistence - Select Levels window, select the levels H0 Time.Month, H1 Products.Product Brand, and H3 Offices.Company. Click Next.




In the Aggregate Persistence - Select Connection Pool window, select the database 02 - Sample App Exa Data (ORCL). Make sure you select the Oracle database (ORCL) and not Essbase (ESSB). Select the Catalog/Schema 02 - Sample App Exa Data (ORCL) > BISAMPLE_EXA. Select the connection pool Exa Sample Connection. Change the default aggregate table name to ag_BISAMPLE_EXA. Click Next.




In the Aggregate Persistence - Finish window, check the script. Select I am done and click Next.




In the Aggregate Persistence - Finish Script window, check that the script is created and placed in the specified folder. Click Finish.




Open the aggregate.sql script file in edit mode and check the script.
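Based on the selections made in the wizard above, the generated script should contain a statement broadly of the following shape. The exact quoting and object names come from your own repository, so treat this as a sketch:

create aggregates
"ag_BISAMPLE_EXA"
for "01 - Sample App Exa"."F0 Sales Base Measures"
    ("1-Revenue", "2-Billed Quantity", "3-Discount Amount", "4-Paid Amount")
at levels ("01 - Sample App Exa"."H0 Time"."Month",
           "01 - Sample App Exa"."H1 Products"."Product Brand",
           "01 - Sample App Exa"."H3 Offices"."Company")
using connection pool "02 - Sample App Exa Data (ORCL)"."Exa Sample Connection"
in "02 - Sample App Exa Data (ORCL)".."BISAMPLE_EXA";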






Running nqcmd in Command Line


Open a command window. Change directory to c:\bi\instances\instance1\bifoundation\OracleBIApplication\coreapplication\setup. Your environment may have a different path. Enter and run bi-init in the command window.


A second window opens. At the prompt in the second command window, enter:


nqcmd -d coreapplication_OH1291352497 -u weblogic -s c:\temp\aggregate.sql


d - Is the ODBC data source name for the Oracle BI Server to which you want to connect.


u - Username


s - SQL script file you created using the Aggregate Persistence wizard. For this tutorial it is c:\temp\aggregate.sql




Enter welcome1 as the password and press Enter.




Wait till you get Statement execute succeeded message.




You have completed running the nqcmd command for creating aggregates with the Aggregate Persistence Wizard.


Note: If the statement execution fails, make sure that, before running the nqcmd command, you do not have any summary tables in the database, the Business Model layer, or the Physical layer for the dimensions you are using. You can use the delete aggregates command to remove them (see the sketch below).
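The cleanup mentioned in the note can be scripted the same way as the creation: run a .sql file through nqcmd whose only statement is the one below, which in its unqualified form removes all aggregates previously created through aggregate persistence.

delete aggregates;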

Usage Tracking in OBIEE 11G

Why?


  • To monitor system and ad hoc query performance and to provide usage details.

  • Which reports are queried, by whom, and how many times each report, dashboard or ad hoc analysis is run by users; who logged in.


  • Daily and weekly peak usage patterns, and how much the load on OBIEE varies.

    There are three main components:



  • Changes/configuration in the RPD

  • Configuration in Enterprise Manager


  • SQL to run to set up the Usage Tracking schema.


How to ENABLE?


There are two methods of enabling usage tracking:

  • Direct insertion - the Oracle BI Server inserts statistics for every query into a database table.

  • Log file - the Oracle BI Server writes statistics for every query to a log file.


It is recommended that you use direct insertion to write statistics to a database table.

The sample RPD can be found at:

{OBIEE_11G_Instance}/bifoundation/OracleBIServerComponent/coreapplication_obis1/sample/usagetracking.

Note: By default, usage tracking is recorded in the xxx_BIPLATFORM schema that is created by the Repository Creation Utility (RCU) at the start of the OBIEE installation. You can create your own table to hold usage tracking data in Oracle or another database, but you should generally use the predefined table.

Scripts are available from Directory:

{OBIEE_11G_Instance}\bifoundation\OracleBIServerComponent\coreapplication_obis1\sample\usagetracking\SQL_Server_Time

Run the following SQL scripts:

  • Oracle_create_nQ_Calendar - create table script

  • Oracle_create_nQ_Clock - create table script

  • Oracle_nQ_Calendar - insert script

  • Oracle_nQ_Clock - insert script

  • Also create the S_NQ_ACCT table

  • We also need to create the NQ_LOGIN_GROUP view, using the following syntax:


Scripts are available from Directory:

{OBIEE_11G_Instance}\bifoundation\OracleBIServerComponent\coreapplication_obis1\schema

Compile SAACCT.Oracle (this creates the S_NQ_ACCT table).

SQLServer_create_nQ_UserGroup - use this script as the basis for the NQ_LOGIN_GROUP view; for an Oracle database the equivalent syntax is shown below:

CREATE OR REPLACE VIEW nq_login_group AS SELECT DISTINCT user_name AS login, user_name AS resp FROM s_nq_acct;


Grant select on nq_login_group to public;

Commit;

These scripts create the following tables:

  • S_ETL_DAY

  • S_ETL_TIME_DAY

  • S_NQ_ACCT




We now need to add extra fields to the S_NQ_ACCT table in the Physical layer. This is because Oracle has added extra fields to the actual table and altered the usage tracking functionality in OBIEE to use them. If they are not in the RPD, OBIEE will fail to update the table with usage details.



  • Expand the S_NQ_ACCT table in the Physical layer and check that all of the required fields exist with the correct data types (the full list is shown further below).


  • Rename RUNAS_USER_NAME to IMPERSONATOR_USER_NAME


  • Add the following missing fields (right click on S_NQ_ACCT, select New Object -> Physical Column…)



    • ID


      • Type: VARCHAR

      • Length: 50

      • Nullable: Unchecked




    • QUERY_BLOB


      • Type: LONGVARCHAR

      • Length: 4000

      • Nullable: Checked




    • QUERY_KEY


      • Type: VARCHAR

      • Length: 128

      • Nullable: Checked





  • Below is a list of all of the required fields, data type, length, and nullable flag:

Field Name               Data Type    Length  Nullable?
CACHE_IND_FLG            CHAR         1       No
COMPILE_TIME_SEC         DOUBLE               Yes
CUM_DB_TIME_SEC          DOUBLE               Yes
CUM_NUM_DB_ROW           DOUBLE               Yes
END_DT                   DATETIME             Yes
END_HOUR_MIN             CHAR         5       Yes
END_TS                   DATETIME             Yes
ERROR_TEXT               VARCHAR      250     Yes
ID                       VARCHAR      50      No
IMPERSONATOR_USER_NAME   VARCHAR      128     Yes
NODE_ID                  VARCHAR      15      Yes
NUM_CACHE_HITS           DOUBLE               Yes
NUM_CACHE_INSERTED       DOUBLE               Yes
NUM_DB_QUERY             DOUBLE               Yes
PRESENTATION_NAME        VARCHAR      128     Yes
QUERY_BLOB               LONGVARCHAR  4000    Yes
QUERY_KEY                VARCHAR      128     Yes
QUERY_SRC_CD             VARCHAR      30      Yes
QUERY_TEXT               VARCHAR      1024    Yes
REPOSITORY_NAME          VARCHAR      128     Yes
ROW_COUNT                DOUBLE               Yes
SAW_DASHBOARD            VARCHAR      150     Yes
SAW_DASHBOARD_PG         VARCHAR      150     Yes
SAW_SRC_PATH             VARCHAR      250     Yes
START_DT                 DATETIME             Yes
START_HOUR_MIN           CHAR         5       Yes
START_TS                 DATETIME             Yes
SUBJECT_AREA_NAME        VARCHAR      128     Yes
SUCCESS_FLG              DOUBLE               Yes
TOTAL_TIME_SEC           DOUBLE               Yes
USER_NAME                VARCHAR      128     Yes



  • The join between S_NQ_ACCT and NQ_LOGIN_GROUP is broken. Select both tables, right-click and choose Physical Diagram -> Selected Object(s) Only.


  • Select the red arrow, and click Delete to remove the broken join.




  • Now click on the new join icon




  • Draw a new join from S_NQ_ACCT to NQ_LOGIN_GROUP and change the join condition to:

    "ABC Usage Tracking"."Catalog"."dbo"."NQ_LOGIN_GROUP"."LOGIN" = "ABC Usage Tracking"."Catalog"."dbo"."S_NQ_ACCT"."USER_NAME"


     Now right-click on the table S_NQ_ACCT and select Properties.


     Select the Keys tab.

     If there is no key defined, enter ID as the key name and select the ID field from the drop-down in the Columns field.

     Click OK.

     Don't forget to set up proper connection pool information for both connection pools:

  • Usage Tracking Connection Pool

  • Usage Tracking Writer Connection Pool.


4. Amend NQSConfig.INI (the file location has changed from 10g to the one below):

C:\Middleware\instances\instance1\config\OracleBIServerComponent\coreapplication_obis1\NQSConfig.INI



5. Set the Physical Table Parameter.

  • Set the PHYSICAL_TABLE_NAME parameter to specify the table into which the records corresponding to query statistics are inserted.

  • The physical table name is the fully qualified name as it appears in the Physical layer of the Admin Tool.




The general structure of this parameter depends on the type of database being used. For Oracle, the structure is

PHYSICAL_TABLE_NAME = "<Database>"."<Schema>"."<Table>".


Set the CONNECTION_POOL parameter to specify the connection pool to use for inserting records into the usage tracking table. The setting is CONNECTION_POOL = "ABC Usage Tracking"."Usage Tracking Connection Pool". Leave the default settings for BUFFER_SIZE, BUFFER_TIME_LIMIT_SECONDS, NUM_INSERT_THREADS, and MAX_INSERTS_PER_TRANSACTION. Your configuration file should look like this:
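As a sketch only (the database, schema and connection pool names must match your own repository, the buffer values are just examples, and in 11.1.1.6+ these values are normally written by the centrally managed MBean settings described in the next section), the usage tracking section ends up looking something like:

[USAGE_TRACKING]
ENABLE = YES;
DIRECT_INSERT = YES;
PHYSICAL_TABLE_NAME = "ABC Usage Tracking"."Catalog"."dbo"."S_NQ_ACCT";
CONNECTION_POOL = "ABC Usage Tracking"."Usage Tracking Connection Pool";
BUFFER_SIZE = 10 MB;
BUFFER_TIME_LIMIT_SECONDS = 5;
NUM_INSERT_THREADS = 5;
MAX_INSERTS_PER_TRANSACTION = 1;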





The connection pool name is the fully qualified name as it appears in the Physical layer of the repository.

Save and close the NQSConfig.ini configuration file.

Creating the Business Model Layer for Usage Tracking

Right-click in the Business Model and Mapping layer white space and select New Business Model.



Name the business model 13-Usage Tracking 2, and then click OK. Note: there is a similar business model name 10 - Usage Tracking. This is set up by Sample Applications.



Right-click 13-Usage Tracking 2 and select New Object>Logical Table.



Name the table Measures and click OK.



Repeat and add three more tables - Time, Topic, and User to the 13-Usage Tracking 2 business model.



Drag the following three physical columns from Usage Tracking Schema>S_NQ_ACCT to the Measures logical table in the 13-Usage Tracking 2 business model. For each column, right-click the logical column, select Rename, and then rename as follows:



Physical Column     Rename
QUERY_TEXT          Query Count
ROW_COUNT           Row Count
TOTAL_TIME_SEC      Total Time Seconds


Right-click the column Query Count and select Properties.



Click the Aggregation tab and select Count from the Default aggregation rule drop-down list. Click OK.



Similarly, set the Aggregation rule for other logical columns in the Measures logical table as follows:


Logical Column        Aggregation Rule
Row Count             Sum
Total Time Seconds    Sum


Drag the following three physical columns from Usage Tracking Schema>S_NQ_ACCT to the Time logical table in the 13-Usage Tracking 2 business model. Rename them as follows:


Physical Column    Rename
START_DT           Start Date
START_HOUR_MIN     Start Hour Minute
END_HOUR_MIN       End Hour Minute


The Time logical table should look like this:



Set Start Date as the logical key for the Time logical table. Right-click the Time logical table and select Properties.



Click the Keys tab and enter Time_key in the Key Name column.



Select Start Date from the Columns drop-down list. Click OK.



Drag the following two physical columns from Usage Tracking Schema>S_NQ_ACCT to the Topic logical table in the 13-Usage Tracking 2 business model. Rename them as follows:


Physical Column      Rename
QUERY_TEXT           Logical SQL
SUBJECT_AREA_NAME    Subject Area


Set Logical SQL as the logical key for the Topic logical table.



Drag the USER_NAME physical column from Usage Tracking Schema>S_NQ_ACCT to the User logical table in the 13-Usage Tracking 2 business model. Rename it to User Name.



Set User Name as the logical key for the User logical table.



Use the Business Model Diagram to create logical joins from Measures to the Time, Topic, and User tables. Right-click the 13-Usage Tracking 2 business model and select Business Model Diagram>Whole Diagram.



Click the New Join icon on the toolbar and create logical joins from Measures to Time, Topic, and User.



You have now created the Usage Tracking business model. You will next move it to the Presentation layer.


Configuring Usage Tracking in Enterprise Manager

You now configure usage tracking through Oracle Enterprise Manager (EM) by performing the following steps:


Log into EM. In this example, the user name is weblogic and the password is welcome1.



Select WebLogic Domain>bifoundation_domain from the left navigation pane.



Click the WebLogic Domain drop-down list in the right pane.



Select System MBean Browser from the list.



Expand Application Defined MBeans>oracle.biee.admin.



Expand bifoundation_domain.



Lock the domain to update it. Expand BIDomain and select the BIDomain MBean where group=Service. Note: Expand Show MBean Information in the right pane.



Click the Operation tab.



Click the Lock link. On the Operation:lock page, click the Invoke button.



Click the Return button.



Expand BIDomain.BIInstance.ServerConfiguration, and then select the BIDomain.BIInstance.ServerConfiguration MBean.



Click the Attributes tab. Scroll down and click UsageTrackingCentrallyManaged. Set Value to true in the drop-down list.



Click the Apply button. When you see the confirmation message for the update, click the Return button.



Similarly, set the UsageTrackingEnabled attribute to true to enable usage tracking.



Set the UsageTrackingDirectInsert attribute to true to enable direct insertion.



Set the UsageTrackingPhysicalTableName attribute to the name of the fully qualified database table for collecting query statistic information, as it appears in the Physical layer of the Oracle BI repository. In this example, it is set as follows:


"12-Usage Tracking"."Usage Tracking Schema"."S_NQ_ACCT"



Set the UsageTrackingConnectionPool attribute to the name of the fully qualified connection pool for the query statistics database, as it appears in the Physical layer of the Oracle BI repository. In this example, it is set as follows:


"12-Usage Tracking"."UsageTracking Connection Pool"



Note: For usage tracking insertions to succeed, the connection pool must be configured with a user ID that has write access to the back-end database. Also, it is recommended that the connectivity type support international data.


After applying your changes, release the lock on the domain, as follows:




  • Return to the BIDomain MBean where group=Service under oracle.biee.admin, Domain:bifoundation_domain, BIDomain.

  • Click the Operations tab.

  • Click one of the commit operations and invoke it to commit the changes.



Go to the Oracle Business Intelligence Instance page and click Restart All on the Availability>Processes tab.
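Once the components are back up and a few analyses have been run, a quick sanity check is to query the usage tracking table directly. A simple illustrative query, assuming the default S_NQ_ACCT table:

SELECT user_name,
       saw_dashboard,
       COUNT(*)            AS queries_run,
       AVG(total_time_sec) AS avg_seconds
FROM   s_nq_acct
GROUP BY user_name, saw_dashboard
ORDER BY queries_run DESC;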