Monday, December 28, 2009

Why do you see date/year, customer number, or invoice number fields with values like 2016, 2032, or 8022?

The reason is that the aggregate property was set incorrectly by the modeler, who set it to 'Total'.

The Aggregate property for attributes should be set to None, Unsupported, or possibly Count.
Summarization should be performed only on facts, never on attributes.
As a report developer, you can set the aggregate property to 'None' on the report.
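
As a rough SQL sketch of what goes wrong (the orders table and its columns are hypothetical), treating the year as a fact makes Cognos sum it:

    -- Aggregate = 'Total': two orders placed in 2008 roll up to 4016
    SELECT customer_id, SUM(order_year) AS order_year
    FROM orders
    GROUP BY customer_id;

    -- Aggregate = 'None': the year is grouped like any other attribute
    SELECT customer_id, order_year
    FROM orders
    GROUP BY customer_id, order_year;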



Monday, December 21, 2009

How to quickly display your measures as values or as percentages of totals?

IBM Cognos PowerPlay Studio allows you to display your measures as values, percent of totals, or percent of subtotals.


Steps to do this:

1. In the report, right-click the intersection of the row and column labels.

2. Choose the Values, Percent of Totals, or Percent of Subtotals option.




Tuesday, December 15, 2009

How to display your reports on the Dashboard?

IBM Cognos Report Studio allows you to display selected reports on your dashboard.
Steps to do this:
1. Click the New Page icon and give the page a name, e.g. "My Daily Watch List". Click Next.
2. Select the dashboard layout and, at the bottom, click Add.
3. Select Dashboard Utility and then select Multipage.
4. Click Next, enable 'Add the page to portal tabs', and click Finish.

5. Click the edit button.
6. Choose a folder or package that contains reports.
7. See the output.





Monday, December 14, 2009

How to use HTML tags to improve your reports?


HTML items can deliver a lot of extra content without reducing the performance of your reports.

For example, use HTML tags to provide mouse-over features on prompts, column labels, and so on.
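
For instance, a hypothetical HTML item placed next to a column title could wrap the label in a span with a title attribute (the tooltip text here is made up):

    <span title="Revenue net of returns and discounts">Revenue</span>

Hovering over the label then shows the tooltip, with no extra query cost.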



Monday, December 7, 2009

How to organize information from multiple fields into one column, such as addresses or name/invoice-number combinations?

IBM Cognos Report Studio allows you to organize related content into a single column.


For example, the employee information can be organized into a single column.

Steps to do this:

1. Add the required field to the list object.
2. Click the "LOCK" icon to unlock the column.
3. Add the other related fields into the same column.
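
If you would rather build one physical data item than nest unlocked fields, a single query calculation can concatenate them; this is an alternative to the steps above, and the field names are illustrative:

    [Last Name] + ', ' + [First Name] + ' - ' + [Department]

The unlock-and-nest approach is usually preferable, though, because each nested field keeps its own formatting.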

Tuesday, December 1, 2009

How to quickly create professional looking report formats and styles?

IBM Cognos Report Studio provides various Table styles to meet the needs of financial, budgeting, accounting and other reports.


Wednesday, November 25, 2009

Microcharts - the new visualization technique in IBM Cognos 8.4!

Microcharts can be inserted into the rows of a report to improve visualization.




Thursday, September 24, 2009

Free Online Training in IBM Cognos 8 PowerPlay


Hello Everyone,
We are pleased to announce a FREE online training in IBM Cognos 8 PowerPlay.
IBM Cognos 8 BI PowerPlay Studio: Web Exploration (V8.4) is designed to teach you to view, explore, format, and distribute PowerPlay reports using PowerPlay Studio.
Register now!

Introduction to IBM Cognos 8 PowerPlay Studio

  • Examine PowerPlay Studio and its interface
  • Add and save data to ad hoc reports
  • Drill Up and Drill down your Analysis
  • Filter and sort your reports
  • Create Arithmetic and Percent calculations
  • Identify Top 10 performers
  • Highlight exceptional data
  • Create Custom subsets to meet unique business requirements
  • View data by using appropriate charts

Save up to 25% on your next IBM software purchase or IBM training classes. Please call Sandra at 404.315.1555, x 314 or email us
Please mark your calendars for October 16th 2009 between 10:00-11:00 AM EST. Please email support@softpath.net to confirm your seat and receive a copy of the training documentation.

Monday, September 21, 2009

Migration Step-by-Step Tasks

This is a tentative migration road map. All durations and subtasks are estimates, and planning will require additional time. All assumptions are based on Windows machines.





Activity and subtask breakdown (durations per subtask; one resource assigned throughout):

Sandbox migration - 11 days
- Install Series 7: 1 day
- Install Cognos 8: 1 day
- Configure LDAP/Access Manager: 1 day
- Import sample catalogs and reports: 0.5 day
- Import Impromptu reports: 0.5 day
- Import sample cubes and reports: 0.5 day
- Migrate the catalog with IMPCAT2XML: 0.5 day
- Copy unsecured cubes: 0.25 day
- Copy secured cubes: 0.25 day
- Publish a cube as a package: 0.25 day
- Create data source connections for the catalog and cubes: 0.25 day
- Test them to see if they work: 1 day
- Use Migrate to C8 to migrate Impromptu and PowerPlay reports: 1 day
- Use Deployment Manager to create a dmp file of web content such as newsboxes: 1 day
- Manually deploy reports to Cognos Connection: 1 day
- Test your reports / open with Analysis Studio and Report Studio: 1 day

Actual migration

Pre-clean the environment - 4 days
- Remove broken URLs: 0.5 day
- Remove unwanted reports: 1 day
- Identify secured and unsecured cubes: 1 day
- Make the administrator the owner of all catalogs and cubes: 1 day
- Identify any image files: 0.5 day

Organize your source - 1.5 days
- Create a folder for catalogs: 0.25 day
- Create a folder for cubes: 0.25 day
- Create a folder for Impromptu reports: 0.25 day
- Use Deployment Manager to create dmp files for web content: 0.5 day
- Create a folder for PPX reports: 0.25 day

Step-by-step migration

Catalog migration - 5 days
- Migrate catalogs using IMPCAT2XML: 1 day
- Publish Framework Manager packages with data source connections: 1 day
- Test your migration extensively: 3 days

Cube and MDL migration - 6.5 days
- FTP your MDL files and cubes: 1 day
- Edit the COGNOS.INI file for the data source connection: 0.5 day
- Open the MDL in the newer version of Transformer and save: 1 day
- Publish cubes using Transformer/Framework Manager: 2 days
- Create data source connections for the cubes: 1 day
- Test your cubes: 2 days

Migrate to C8 - 2.75 days
- Open a command window: 0.25 day
- Browse to the location: 0.25 day
- Type MigrateToC8 <source> <target>: 0.25 day
- Look at the target directories: 1 day
- Repeat the above four steps for the various folders (IWR, IMR, PPX, DMP, etc.): 1 day

DeployToC8 - 5.75 days (two options: manual or auto)
Manual:
- Copy the .zip directory from the target location to the C8 deployment location: 0.5 day
- Check that the migration service is on in C8 (the default is always on): 0.5 day
- Import the deployment file: 0.25 day
- Test your reports: 2 days
Auto (optional):
- Browse to the DeployToC8 location under C8: 0.25 day
- Type DeployToC8 <source> <target>: 1 day
- Test your reports: 2 days

Post-migration tasks - 22 days
- Replace IQDs with package or report sources: 7 days
- Disable or interoperate Series 7 and Cognos 8: 5 days
- Recreate broken reports: 10 days

Total duration (approx.): 60 days
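
The Migrate to C8 and DeployToC8 steps above are run from a command window. A sketch of what the invocations might look like, using the <source> <target> pattern from the table (the directory paths are illustrative only):

    cd <location of the migration tools>
    MigrateToC8 D:\series7\source D:\migration\output
    DeployToC8 D:\migration\output D:\c8\deployment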



Wednesday, August 12, 2009

Migration: Best Practice to Replace IQD's

The first challenge in the post-migration phase is to gradually reduce dependence on Impromptu Query Definitions (IQDs) as a data source in Transformer.
The reasons for moving away from IQDs are:
1. IQDs are external files on disk and hence should be avoided as a metadata source.
2. You might need to keep maintaining Impromptu licenses.
3. Even if you create your IQDs with Framework Manager, you would still be building cubes from an externalized file.

IQDs can be replaced with a Report Studio report or a Framework Manager package. In both cases the source has to be pre-validated to avoid cross joins and other resource-intensive SQL.

Using Report Studio reports as a source for Transformer cubes
When working with Report Studio reports, keep in mind that objects such as crosstabs and repeaters cannot be used; Transformer can only consume list-type reports.
Pros:
- The report will already have been tested and simplified.
- Any change or update to the query can be managed in Framework Manager or Report Studio.
- Complex calculations can be performed.

Cons:
- Transformer only picks up the fields in the report layout, not all the fields in the query.
- If the query is not dedicated to Transformer use, you could be querying unnecessary columns, and layout calculations and prompts would have to be modified.
- Report specifications have to be updated whenever new fields are needed in the Transformer model.
- You would have to ungroup columns, and any counts based on grouped values would be lost.
- If the Transformer model is based on separate structure and fact queries, you would have to create multiple queries.


A relational package as a source for Transformer cubes:
Pros:
- Importing relational sources gives the Transformer modeler more flexibility.

Cons:
- A poorly designed package will require ongoing updates.
- Cross joins could be executed if, for example, a single package with multiple namespaces was published; you could end up creating dimensions from completely different namespaces.

DMR packages as a source for Transformer cubes:
Pros:
- DMR packages offer visibility and predictability.
- Developers can import dimensions from a DMR package directly into the Transformer dimension map.
- This ensures that the dimension structure is standardized across the company.

Cons:
- If a particular Transformer model requires a different hierarchy, you will have to maintain multiple versions of the dimension in Framework Manager.
- You have to make sure measures are allocated to the lowest level of every dimension.
- It takes more modeling resources and time, since dimensions and star groupings have to be generated from relational schemas.

Thursday, July 23, 2009

Free End User Training - Cognos Query Studio

We are pleased to announce a FREE online training in Cognos Query Studio. Query Studio is a Cognos tool that lets users whip up ad hoc reports and analyses themselves.

Introduction to Query Studio
Examine Query Studio and its interface
Add and save data to ad hoc reports
View data by using appropriate charts
Create list, grouped list, and crosstab reports
Filter and sort your reports
Create detail and summary calculations
Move columns and reorder columns
New features in Cognos 8.4 Query Studio
Best Practices and optimization of queries

Please mark your calendars for August 21st 2009 between 10:00-11:00 AM. Please email support@softpath.net to confirm your seat and receive a copy of the training documentation.

Wednesday, July 15, 2009

Pointers while Integrating/migrating- M&A scenario

Here are some general pointers to keep in mind while planning migrations, upgrades, conversions, and the like. Most of these recommendations can be made only after a thorough understanding of the client's systems architecture, the maturity of their BI systems, and their future plans. Anyway, here is an appetizer.

1. Assess your architecture
2. Huge performance gains can be achieved by having dedicated servers for reporting services, batch report services, Agent and monitoring services and presentation services respectively.
3. Gateways: ISAPI is the most recommended gateway. You can also improve performance by adding multiple gateways, especially if your servers are spread across multiple locations.
4. Perform a capacity planning exercise. Most BI tools offer pre-built exercises.
5. If you switch security namespaces, you will encounter a lot of accessibility issues and will not be able to export personal content.
6. If external vendors or clients access your BI portal, firewalls can cause accessibility issues. You will have to configure your firewalls and gateways to support multiple dispatchers.
7. Consider virtualization of your servers to cut costs.
8. If you plan on merging data sources from multiple companies in the near future, plan to back up existing models, packages, and data source connections. Also, perform a step-by-step deployment into the new environments or domains.

Some challenges you might come across:

1. Accessing cross-company portals. Are you going to have two portals? Are you going to add users in two domains?
2. Development and support teams in domain A may have to support users in domain B. Developer, support-team, and ticket management has to work across domains with additional privileges. This might not seem like a huge liability, but it can be time consuming to get up and running.
3. There will be a need for more licenses and the appropriate capabilities.
4. You will have to assess the hardware requirements and configuration.

However, this is a very good time and opportunity to introduce new BI technologies and practices.
- You could start a BICC (BI Competency Center)
- Upgrade to newer versions of the tools

Tuesday, June 16, 2009

Master Data Management

Authored by: Chintan

What is Master Data? Most IT organizations have their data dispersed and stored at various locations, yet that data is shared and used by several of the applications that make up a data warehouse. For example, an ERP system has data from the Customer Master, Item Master, and Account Master. Master data is one of the key assets of any company.

The Need for Master Data Management: Since master data is used by multiple applications, an error in it can cause errors in every application that uses it. For example, important documents, bills, and checks get sent to the wrong person because of an incorrect address in the Customer Master. Similarly, an incorrect price in the Item Master can be a marketing disaster, and an incorrect account number in the Account Master can lead to huge fines. To avoid such hazards, maintaining a high-quality, consistent set of master data is necessary for every organization.

What is Master Data Management (MDM)? The technology, tools, and processes required to create and maintain consistent and accurate lists of master data are known as Master Data Management. MDM is a continuous, iterative process shaped by many factors: requirements, priorities, resource availability, time frame, and the size of the problem.

MDM Life Cycle: An MDM project involves the following stages:
1- Identify sources of master data.
2- Identify the producers and consumers of the master data.
3- Collect and analyze metadata about master data.
4- Appoint data stewards.
5- Develop the master-data model.
6- Choose a toolset.
7- Finalize and receive approval for the process.
8- Design and implement the process.
9- Test the master data.
10- Modify the producing and consuming systems.
11- Implement the maintenance processes.

Conclusion: In recent times, creating and maintaining accurate and complete master data has become a business imperative. Both large and small businesses must develop data maintenance and governance processes and procedures to obtain and maintain accurate master data.

Wednesday, June 10, 2009

Data Quality

In 2003, the Data Warehousing Institute calculated that bad data quality leads to a whopping loss of approximately $600 billion annually.

Data quality improvement comprises the processes and technologies involved in ensuring that data values conform to business requirements and acceptance criteria.

The factors that adversely affect data quality are:
Legacy systems and data: Legacy systems may or may not have validations built into them. They tend to have redundant data, composite keys, and referential integrity issues.
Application evolution: Applications evolve over time, and data entry operations and client- and server-side validations are often overlooked, resulting in bad data quality.
System work-arounds: More often than not, quick fixes and temporary measures are deployed to meet deadlines or work around technology limitations.
Time decay: Even the best systems cannot stand the test of time; what better example than the Y2K bug? Data quality deteriorates with time.
Lack of common data standards: Companies do not always invest time and resources in creating best practices, standards, and checklists. Simple measures such as universal naming conventions can improve the quality of the data.
Data entry issues: Data entry issues are the number-one cause of poor data quality. If data entry is performed by customers or web-based users, it is very likely that junk and misplaced information will be gathered. Even internal data entry operations are compromised by free-form 'remarks' or 'comments' sections.

So how can this data be cleaned up?
DIY: Do it yourself by looking into databases, forms, applications, etc. Of course, this is not the best choice, but it is a beginner's step that can lead you to a road map for data cleansing.

Invest in data cleansing tools: Outsource to those who can do it best. Yes, now we are talking business. Invest in identifying the suitable tools in the market. Here are some:
· Informatica Power Center
· Trillium Software
· Business Objects Data Integrator
· Data Flux

So, going forward, how do you prevent rather than cure? Here are some ideas we have gathered.

Data Profiling - Analyze the data for correctness, completeness, uniqueness, consistency, and reasonability. This must be done in the order of column profiling, then dependency profiling, then redundancy profiling (a simple sketch follows this list).
Data Cleansing - Detect and correct corrupt or inaccurate records in a record set, table, or database. The common methods are parsing, data transformation, duplicate elimination, and various statistical methods.
Data Defect Prevention - Set up a data governance group that takes control of and responsibility for the various databases and enforces data quality rules. It will conduct regular audits and data cleansing programs, and it also has to take charge of training data entry and other personnel.
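
As a first, minimal column-profiling query (the customers table and its email column are hypothetical):

    SELECT COUNT(*) AS total_rows,
           COUNT(email) AS non_null_emails,
           COUNT(DISTINCT email) AS distinct_emails
    FROM customers;

A gap between total_rows and non_null_emails points at completeness problems, and a suspiciously low distinct count can flag junk default values.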


Data Quality: Authored by Vivek and Devi. Cleansed by Vai :-)

Tuesday, June 2, 2009

What's new in Cognos 8.4 GO! Family

Cognos on the GO!
Go! Mobile
The newer Go! Mobile version gained location intelligence and query capabilities as well as the ability to deliver prompted, scheduled and bursted reports. The product takes advantage of GPS information with Blackberry, Symbian and similar mobile devices.

Go! Dashboards
This is an Adobe Flash-based dashboarding tool that lets users visualize information in drag-and-drop fashion. This new feature gives dashboards a slick appearance and it supports dynamic interaction. So, visualizations change as you move sliders and drill down on data.

Go! Search
The earlier version of this product was limited to searching preexisting Cognos reports. Version 8.4 delivers original query results, as well as cubes and unstructured (Word and PDF) documents and reports.

Monday, June 1, 2009

What's new in Cognos 8.4 Query Studio?

In Query Studio 8.3, users could only filter on the fields in the report body.
- QS 8.4 allows filtering on any field in the package (works only with relational packages).
- QS 8.4 allows filtering using wildcards.
That's it for today. More tomorrow.

Friday, May 29, 2009

What's New in Cognos 8.4

Hello everyone! We got our hands and knees dirty digging into the new Cognos 8.4. Here is the first in a series of blogs on the new features in Cognos 8.4. Thank you Prachi and Amar for your contributions! You have a surprise gift on the way. This blog is supported by viewers like you. (PBS???)
Cognos has introduced new chart types, images with effects, and data lineage features.

Marimekko chart: A 100% stacked chart in which the width of a column is proportional to the total of the column's values. The height of an individual segment is a percentage of the respective column's total. It is also frequently called a "market map" and enables strategic analysis.

Step line charts: These are modified line charts in which the data points are joined by horizontal and vertical lines. Step lines combine time and trend analysis.

Microcharts: Miniature charts that can be inserted into list and crosstab cells.

Generated images: You can define and generate an enhanced background for objects in a report. The images can be enhanced with borders, fills, and drop-shadow effects. You can also apply enhanced backgrounds as a class style.

Data lineage: This feature is available across all studios and is viewable in report outputs (HTML only). It traces the metadata of an item, e.g. viewing the lineage information of a model calculation.

Wednesday, May 20, 2009

Tips and Techniques

How to format numbers like SSNs, phone numbers, zip codes, ISBNs, account numbers, etc.?
It is very common to find SSNs and other numbers stored as XXXXXXXXX while users would like to see XXX-XX-XXXX.
You can achieve the formatting with substring, LTRIM, RTRIM, and the like, but those functions are resource intensive. We have a better solution. Try this (SQL Server syntax; stuff inserts text at a 1-based position):
stuff(stuff([field_expression_SSN], 4, 0, '-'), 7, 0, '-')
This turns 123456789 into 123-45-6789.

How to save real estate on a prompt page when working with date prompts?
In the properties pane of the date prompt control, set the "Select UI" property to Edit box. The default UI for a date prompt is Calendar, which consumes a lot of space; the edit box also allows typing the date in.

Why shouldn't we edit data source query subjects?
Cognos generates the most optimized queries when the import layer contains unmodified data source query subjects, because Framework Manager then does not fetch metadata at run time. If a data source query subject is edited, Cognos fetches the metadata from the database every time the query is run.

Why do you see a year field with values like 2016 or 2032?
The aggregate property was set incorrectly by the modeler, who set it to 'Sum'; it should be Unsupported or possibly Count. The Usage property can be edited to Attribute. As a report developer, you can set the aggregate property to 'None' on the report.

Render Variable
A render variable is not available on table cells or rows. To render or hide a table cell or table row, use a style variable.

Wednesday, April 29, 2009

Dimension Management: IBM Cognos 8 Business Viewpoint

Why has Cognos not thought of central metadata management that allows dimensions and models to be reused across Framework Manager, PowerPlay Transformer, Planning Analyst, Cognos NOW, and others?

Why can't we create all the required dimensions, with hierarchies, levels, and the required attributes, in one central location and deploy them as needed? Well, just as I was working on this paper, Cognos 8.4 delivered a solution to some of my questions.

The answer is IBM Cognos 8 Business Viewpoint.

IBM Cognos 8 Business Viewpoint Client provides you with the ability to nominate, subscribe to, and update master dimensions between IBM Cognos Planning - Analyst, IBM Cognos Planning - Contributor Administration Console, IBM Cognos TM1, IBM Cognos Transformer, and IBM Cognos Controller into Business Viewpoint Studio.
The above description provided by Cognos has 2 catches.
Catch1:
It is a client tool, I guess something like Framework Manager; so much for zero footprint :-(. What this means is we will need admin privileges, client installations and maintenance, training and documentation, certified individuals, and multiple requests to IT for providing multiple views. :-)
Catch2:
Business Viewpoint doesn't talk about integrating with Framework Manager. This takes me back to my question: why can't we have a single, centralized metadata management layer that includes not just dimensions but also tables, models, namespaces, query subjects, and all?
Good news
- We can reuse dimensions across multiple Cognos applications.
- We can create "views" of dimensions. This will help any business with dynamic, frequently changing dimension hierarchies, for example HR hierarchies that constantly change, or marketing managers who have to work simultaneously with multiple sets of hierarchies across multiple locations.
- You can rework your hierarchies in MS Excel (can we live without it? ever?) and export them back to Viewpoint.
- It has built-in change management, audit, and version control with rollback features.
If you want to watch a Business Viewpoint promotional clip, click this link.
http://download.boulder.ibm.com/ibmdl/pub/software/data/sw-library/cognos/demos/od_business_viewpoint/index.html
Here is a screenshot of the application Interface:

Tuesday, April 28, 2009

Create a portal in 6 easy steps!

1. Plan the objects you need to see on the portal page. The objects could be Cognos Navigator, Cognos Viewer, Cognos Search, or HTML objects, and you can add third-party objects with RSS feeds.


2. Once this is done, create a page: click the New Page button.
3. Name the page and save it to My Folders (you are saving this page/portal so you can edit it later, if required).

4. Now plan the layout of the portal. You have three options: lay out your objects in 1, 2, or 3 columns, and size the columns in percentage widths. See the image below.



5. Add the objects to your columns.

6. Name your page again and select the 'Add this page to the portal tabs' check box.


Go on and customize your portal...

Wednesday, April 22, 2009

What is a Snapshot in Impromptu ?

A Snapshot retrieves all the information for a report and stores it in a permanent cache on the PC. The data is stored along with the report format in the *.imr file.

A snapshot can be used like a database: you don't need access to the database to use one.
Snapshots do not allow you to apply less restrictive filters or to add columns.
Snapshots can be scheduled using Cognos Scheduler.

Use a snapshot when you need:
- All the data for the report in one file
- To email the report as a single file attachment
- To open the report without access to the catalog

Tuesday, April 21, 2009

What is Autogroup and Summarize?

This is a query property that governs how transactional data is reported. You will find it on the query property sheet.
For example, with Auto Group and Summarize set to No, the query returns one row per transaction; with it set to Yes, duplicate rows are grouped and the measure is summarized, as sketched below.
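
Roughly speaking, the two settings correspond to the following SQL (the order_details table is hypothetical):

    -- Auto Group and Summarize = No: every detail row comes back
    SELECT product_name, quantity
    FROM order_details;

    -- Auto Group and Summarize = Yes: rows grouped, measure summarized
    SELECT product_name, SUM(quantity) AS quantity
    FROM order_details
    GROUP BY product_name;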

So, auto group and summarize basically groups the lowest level in the query and summarizes the measure for the same.
Note: Until Cognos 8.2, Auto Group and Summarize had a bug and would generate incorrect queries; it had to be set to No to validate the report. :-(

Monday, April 20, 2009

IBM Cognos8.4 PowerPlay Studio

We have heard from Cognos that Analysis Studio was designed as a data exploration tool and it was never a replacement for PowerPlay Studio. IBM Cognos has finally released the much awaited PowerPlay interface in Cognos 8.4.

Well, it is some gratification to have this interface, but it is not all that PowerPlay Client is. The new studio gives you the exact same interface as PowerPlay Web reports. Those few sitting on the fence about the switch from Series 7 to Cognos 8 now have one more good reason to join the Cognos 8 club.

Here you go…see the new PowerPlay Studio for yourself.





Don't believe me? Well, then look at this picture




Think it is a picture from PowerPlay web reports and not Cognos 8.4?



Why do some of us like PowerPlay Studio over Analysis Studio? Here are some reasons we couldn't live without.

1. We can hide columns in PowerPlay rather than having to zero suppress in Analysis Studio.


2. The various zero suppression options



3. Not to mention custom exception highlighting.
4. We can actually print reports with the title, dimension line filters, and other variables.




Friday, April 17, 2009

Dynamic display of columns

How to display columns in a list based on a prompt selection, for example Revenue or Gross Profit based on the user's choice?

Answer:
There are multiple ways to achieve this: conditional statements, variables, or conditional blocks. We propose here the most efficient process of all.

1. Create a list report with the required columns.
2. Create a prompt with static choices 'Revenue' and 'Gross Profit'.
3. Add a query calculation with the expression #prompt('Parameter Name', 'token')# to your list.
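
A sketch of how the pieces fit together (the model paths are illustrative):

    Query calculation expression:  #prompt('ColumnChoice', 'token')#

    Static choices (display value / use value):
    Revenue       /  [Sales].[Sales fact].[Revenue]
    Gross Profit  /  [Sales].[Sales fact].[Gross profit]

Because the prompt's data type is token, the chosen use value is substituted into the query as raw expression text rather than as a quoted string, so the calculation resolves to whichever column the user picked.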

You are all set!
Note: We will write more about token prompting in upcoming posts.

Thursday, April 16, 2009

Best Practices Manual for Optimized Modeling Using Framework Manager



One Model Versus Too Many Models
It is a common tendency to create one large model with all the possible reporting elements in it, leaving the modeler with a single object to maintain.
Clients usually clutter the model with multiple data sources and group the metadata for every business unit into it to avoid maintaining multiple models. This design method is unsustainable in the long run.

It can cause more problems than anticipated at first because, as the model grows, so does the dependency on it. Modeling more doesn't mean modeling better; it is better to create smaller, more manageable models than one large model with everything in it.

It is recommended to design multiple models based on reporting business units or data sources. For example, an HR project should have a model designed all for itself rather than derive its packages from an enterprise performance model.

Size of the Model
While there are currently no documented limits to the size of a model, at some point the model designer needs to evaluate the breadth of the information being put into the model, keeping in mind that the model will eventually be used to create a package that is, in turn, loaded into one of the studios.

A good rule of thumb: if the model is too cluttered to view comfortably in the Framework Manager diagram viewer, you are likely on your way to a complex model. Package size affects users both in the UI, where it takes time to navigate the structure to find the desired data elements, and in the time it takes to load the package into a studio. Before a package can be consumed by a studio, it is retrieved from the content store and run through a SAX parser, which creates an RTM file. The initial creation and loading of this RTM file takes some time, so the first user to access the package takes a hit in the time it takes to run a report or load the package into a studio.

Once created, the RTM file is reused for all users accessing the same package; however, the file is created on each dispatcher box in a multi-server configuration as requests are sent to each dispatcher. Limiting the size of the package limits the access time for the initial user and in turn improves the user experience.

Structure
The best practice is to switch to a three-tier architecture, which offers flexibility in designing and maintaining the model.
The three tiers are:
Database layer: This layer should contain a base layer that matches the physical database tables and elements. Keeping the database layer as close as possible to the actual database makes it easier to propagate changes in the future. Auto joins should not be enabled at this level. This layer provides any new or existing developer with an accurate view of the existing database. Adding or manipulating joins, changing business names, or creating aliases and SQL-based tables at this level will lead to maintenance confusion, because over time modelers lose the distinction between tables in the database and virtual tables and views created by modelers.
Business layer: This is the sandbox where the modeler can create joins and alias tables, apply additional filters, and create calculations and default prompts and filters. The sandbox lets the modeler group and organize data and pick the best way to model it, dimensional or relational. This layer also maintains business names, which gives the flexibility to support any future changes to naming conventions. A change to a calculation can be made once and is then reflected across multiple reports.

Presentation layer: This is the view offered to users and report consumers. In this layer you design query subjects with all the required fields from multiple tables for each reporting business group. You can provide team-specific business names for certain calculations; for example, an amount field may need to be displayed as a credit on investor accounting reports and as a debit on corporate accounting reports, with the same field labeled differently for each business reporting unit.

Package Design
Model size affects the modeler; package size affects the user's interaction with the studios. Fully loaded packages in particular take a very long time to load in Query Studio.
It is a good practice to hide or exclude any tables or views that are not required by a particular package. This saves load time in Query Studio and refresh-and-validate time in Report Studio.

- Always maintain distinct packages for relational and dimensional (cube-based) data sources.
- Do not import cubes into an existing relational model to publish as a package. Always maintain distinct cube models, for the following reasons:

The categories stored in the cube metadata change and hence require you to republish the cube package, which in turn overwrites your drill-through definitions. This is only an issue up to Cognos 8.2; Cognos 8.3 allows you to publish a cube directly to Cognos Connection (with the necessary domain privileges) without using Framework Manager.

When you open a package that contains both a relational and a cube data source, both data sources require an RTM file to be created, regardless of whether the user's report accesses both. Opening a package with multiple data sources (relational or cube) can therefore be delayed while the RTM file for each data source is created.

Use of Namespaces
It is recommended that you create multiple namespaces in the business layer, one for each of your reporting subdivisions.
The advantages of namespaces are that you limit the number of required tables, you can create report-specific joins, and you reduce the number of alias tables. Namespaces ensure that only the joins represented within the namespace are used, even if other joins exist between the tables.

Pareto principle
Pareto's principle applies to reporting as well: 80% of reporting is generated from 20% of the data, tables, or fields. So it is very important to identify, fine-tune, and constantly monitor this very important 20% of the tables or views.
It is also good practice to place the most frequently used fields at the top of each query subject. Store the identifiers at the top; this subtly nudges the user to pick key fields first, then dimensions, and only then the facts that need to be aggregated.
This is good practice in development too, where you end up running multiple test queries against fact tables. It helps ad hoc report developers start from indexed fields and check that they have the required fields and calculations, rather than picking fields that initiate queries with no indexes and end in full outer joins.
SQL
Use the With clause for better query performance if the request is restricted to functionality supported by the underlying data source software. When a request uses functionality that is not supported by the data source, using the With clause may cause additional decomposition of the query, which can lead to degraded performance. In this case, not using the With clause may generate a better set of queries to the underlying data source.
With pass-through SQL, each query subject is sent to the data source as a separate statement rather than being optimized by Framework Manager, so performance is slower. Therefore, in choosing between native SQL and pass-through SQL, you must decide which is more important: performance, or using SQL that is not permitted in a subquery.
Generally, you should use pass-through SQL only if you must create a query subject that contains constructs that are specific to a data source and that cannot be used inside a derived table, such as in a With or OrderBy clause.
The SQL Generation type of a query subject can be set to either As View or Minimized. By default, it is set to Minimized.
When the generation type is set to Minimized, the generated SQL contains only the minimal set of tables and joins needed to obtain values for the selected query items.
When the generation type is set to As View, Framework Manager generates queries that contain the full SQL statement that defined the query subject. Use As View when you want to ensure that the query is run as a block. The SQL is treated as a view. For example, you want the query to return the same number of rows each time that it is run.
The SQL Generation setting has no effect on a query subject with determinants specified for it.
Using minimized SQL improves performance, resulting in a query that runs significantly faster. Generating minimized SQL is especially beneficial for query subjects that represent dimension tables. By using a single model query subject to model a dimension, you can benefit from small SQL queries that run significantly faster.
For example, when the SQL Generation Type of a query subject is set to As View, the generated SQL contains a nested select statement, roughly as sketched below.
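
A rough illustration against a hypothetical products table, where the report requests only product_line:

    -- As View: the query subject's full definition runs as a block
    SELECT Products.product_line
    FROM (
        SELECT product_line, product_type, product_name
        FROM products
    ) Products

    -- Minimized: only the tables and columns actually needed
    SELECT product_line
    FROM products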
Usage property
It is very important to define the Usage property of each query item after importing it from the data source. In Cognos, all numeric data types are defined as facts by default, which is not always correct.
As a best practice, a modeler should always check the usage property and define it correctly as needed, on both query items and calculated query items. For example, year(date()) results in a number data type with a rollup aggregate of Sum, which gives incorrect results when grouped in a report.
Joins
Join paths are best established based on reporting needs, reporting frequency, and report retrieval limitations. There is no hard and fast rule for resolving loop joins. There are four basic resolutions:
- Break the join
- Create alias tables with different join strategies
- Use the join expression editor to specify the join
- Modify SQL
Each of the resolutions is done for a different reason and may have some issues associated with it. Determine the best resolution for your situation by analyzing the data with regards to the results required from the join structure.

To do or not to do: multiple data source connections
Multiple data source connections allow users to execute reports against different databases that share identical metadata.
They facilitate migration from one environment to another and maintain the integrity of the project.
This is a very flexible option provided by Cognos. In practice, however, there are many bugs associated with multiple data source connections. To begin with, it takes longer to cache metadata, since Cognos now has the job of caching metadata for multiple databases. When switching across databases or cubes it takes longer to process, and you will experience report hang-ups and metadata-refresh hang-ups in Report Studio design mode.
Multiple data source connections have also been used to access one of two databases whose metadata is not identical; in that case you can expect many issues when generating reports.

Quick Tips
- Move towards a 3 tier architecture
- Organize your model using folders and namespaces
- Create Namespaces instead of Alias tables
- Keep metadata simple by importing only required fields
- Upgrade to Cognos 8.3 or higher
- Occasionally disable the "Allow usage of local cache" property and publish your package
- Run your queries against the plan table and monitor index usage, join paths, and query cost