SAP BusinessObjects Sizing, Hardware & Licensing Models (Legacy)

This article concerns the sizing of SAP BusinessObjects software and how a sizing exercise informs both the choice of supporting hardware and the choice of licensing model.

For SAP BusinessObjects Enterprise XI 3.1 it is important to make an informed decision on licensing and hardware based on a SAP BusinessObjects sizing exercise. The sizing exercise takes into account the specific Business Intelligence products chosen for implementation on BusinessObjects Enterprise, together with an analysis of the number and types of users that will be active on the BI system. A series of calculations is then performed based on the findings.

As an example, consider Web Intelligence. A single Web Intelligence Report Server (service) running on BusinessObjects Enterprise can support between twenty-five and forty users making simultaneous requests, depending upon report complexity, and requires one CPU to support it. If more than forty users stress a Web Intelligence Report Server simultaneously, performance will deteriorate. This can be rectified by configuring more Web Intelligence Report Servers and adding CPUs, or by limiting the number of users. The same principle holds true for several of the other servers running on SAP BusinessObjects Enterprise.
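
The arithmetic behind this rule of thumb can be sketched in a few lines of Python. The figures of twenty-five to forty concurrent users per server and one CPU per server come from the text above; the 300-user scenario is a made-up example, not a recommendation:

```python
import math

# Rule of thumb from the text: one Web Intelligence Report Server
# supports 25-40 simultaneous requests (depending on report
# complexity) and requires one CPU.
USERS_PER_SERVER_CONSERVATIVE = 25  # complex reports
USERS_PER_SERVER_OPTIMISTIC = 40    # simple reports

def webi_servers_needed(concurrent_users, users_per_server):
    """Number of Report Servers (and hence CPUs) for a given load."""
    return math.ceil(concurrent_users / users_per_server)

# Hypothetical scenario: 300 simultaneous Web Intelligence users.
concurrent_users = 300
best_case = webi_servers_needed(concurrent_users, USERS_PER_SERVER_OPTIMISTIC)
worst_case = webi_servers_needed(concurrent_users, USERS_PER_SERVER_CONSERVATIVE)
print(f"Servers/CPUs needed: between {best_case} and {worst_case}")
# Servers/CPUs needed: between 8 and 12
```

A real sizing exercise would of course repeat this kind of calculation for each service in the deployment, but the principle is the same.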

Whilst we can scale the system retrospectively in this way, it is better to carry out the sizing upfront. The benefit is that it provides an indication of the number of SAP BusinessObjects servers to configure, the number of CPUs required to support them, and the disk space that your BI system will require.

So sizing is a useful guide to hardware procurement. However, it is also a guide to the most cost-effective type of licensing to procure, i.e. CPU-based, Named User, or a combination of both. If insufficient licensing is procured, BI system performance will suffer; conversely, no one wants to procure more costly software licensing than is necessary.

Business Intelligence Platform Scalability

SAP BusinessObjects Enterprise XI 3.1 is a multitier, server-based product consisting of a series of logical servers. Note that the logical servers are not physical servers: they run as services (such as Windows Services) and can consequently be installed on a single machine or distributed across multiple machines. The logical servers communicate with each other through the Common Object Request Broker Architecture (CORBA). As the number of users and user requests grows and places an increased burden on the system, more SAP BusinessObjects services can be installed and configured. This is known as scaling the Business Intelligence platform. The additional services can be installed on existing machines; however, it may also be necessary to increase the available CPUs or add new machines to the cluster. SAP BusinessObjects Enterprise can be installed on several platforms including Windows, Unix and Linux. Check the SAP BusinessObjects Supported Platforms Guide for more information.

The Aggregate Advantage

This article is about the advantage of using aggregate tables in your data warehouse when using Web Intelligence as your reporting tool. Whilst it is possible to configure a SAP BusinessObjects universe for use with Web Intelligence directly on low-level detail tables such as transaction tables, it is rarely a good idea to do so.

The primary purpose of Web Intelligence is to allow users to perform ad hoc analysis, speedily slicing and dicing data and drilling up and down through it. The speed of this on the fly analysis is possible due to the architecture of Web Intelligence. When a query is executed Web Intelligence creates a small cube of cached data referred to as a microcube. Note the word ‘microcube’. It is not a macrocube, it is not an OLAP cube and Web Intelligence is not an OLAP tool, so we must not try to use it as one. In our experience we have found it best not to overload the microcube with enormous amounts of cached data.

You should go to the transaction tables with Web Intelligence only when you really need to get to that level of detail. If your query returns 10000 rows of data from a transaction table are you really going to scroll through all those rows for analysis purposes? If your answer really is yes, you might want to consider using Crystal Reports instead because it has a different underlying architecture more appropriate to that type of reporting.

If your answer is no, then let’s consider a reporting example. In this example your reporting requirement is to view your sales revenue figures by location, by product category, by year and by month. Now you could run your query against transaction tables and then use Web Intelligence to sum up your revenue. This approach has three weaknesses. The first is that the actual SQL code generated by your query has to execute against very big database tables and so may be slow. The second problem is that you may be returning thousands of rows to your microcube. The third issue is that you are then asking the Web Intelligence program to sum up or aggregate these thousands of rows of data and in doing so you are failing to exploit the power of your database server and instead stressing the Web Intelligence report engine.
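The third weakness can be made concrete with a small sketch. The example below uses Python's built-in sqlite3 module as a stand-in database, and the table and column names are invented for illustration. It contrasts pulling every raw transaction row back and summing it in the client, which is what happens when the report engine is asked to aggregate detail rows, with pushing the aggregation into the SQL so the database server does the work:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE sales_transactions "
    "(location TEXT, category TEXT, year INT, month INT, revenue REAL)"
)
# 10,000 sample transaction rows spread across 12 months.
rows = [("London", "Bikes", 2009, m % 12 + 1, 100.0) for m in range(10000)]
conn.executemany("INSERT INTO sales_transactions VALUES (?,?,?,?,?)", rows)

# Weak approach: fetch every detail row, then aggregate in the client.
detail = conn.execute("SELECT revenue FROM sales_transactions").fetchall()
client_side_total = sum(r[0] for r in detail)  # 10,000 rows shipped back

# Better approach: let the database aggregate, returning one row per group.
summary = conn.execute(
    "SELECT location, category, year, month, SUM(revenue) "
    "FROM sales_transactions GROUP BY location, category, year, month"
).fetchall()

print(len(detail), "detail rows vs", len(summary), "summary rows")
# 10000 detail rows vs 12 summary rows
```

The totals are identical either way; the difference is where the work is done and how many rows travel to the reporting layer.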

There is a better way: your report is a candidate for aggregate tables because it uses summary data. So what are aggregate tables? Simply put, they are tables holding data that has been summed up or aggregated in some way. Aggregate tables are not necessarily a simple summing up; they can also hold other aggregates such as pre-calculated ratios, but the concept is a simple one. You take a transaction table, perhaps holding 100 million rows of data, and using the Data Integrator tool you extract the data, aggregate it, and load it into another table holding the summary data. In our example the table would hold sales revenue by location, by product category, by year and by month. Now let's say that because of the aggregation the summary table holds only 1 million rows of consolidated data; what effect does this have?
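In a deliberately simplified form, the extract-aggregate-load step that Data Integrator performs can be sketched as plain SQL, run here through Python's sqlite3 module. The table names are invented for illustration, and a real Data Integrator job would of course do far more (incremental loads, error handling, scheduling):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales_transactions (
    location TEXT, product_category TEXT, year INT, month INT, revenue REAL);
CREATE TABLE agg_sales_revenue (
    location TEXT, product_category TEXT, year INT, month INT, total_revenue REAL);
""")

# A few sample transaction rows; a real table might hold 100 million.
conn.executemany(
    "INSERT INTO sales_transactions VALUES (?,?,?,?,?)",
    [("Paris", "Bikes", 2009, 1, 250.0),
     ("Paris", "Bikes", 2009, 1, 150.0),
     ("Paris", "Helmets", 2009, 2, 75.0)])

# The aggregation load: extract, aggregate and load in one statement.
conn.execute("""
INSERT INTO agg_sales_revenue
SELECT location, product_category, year, month, SUM(revenue)
FROM sales_transactions
GROUP BY location, product_category, year, month
""")

for row in conn.execute("SELECT * FROM agg_sales_revenue ORDER BY month"):
    print(row)
# ('Paris', 'Bikes', 2009, 1, 400.0)
# ('Paris', 'Helmets', 2009, 2, 75.0)
```

Reports that need summary figures then query agg_sales_revenue and never touch the big transaction table at all.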

Firstly, the SQL query is faster because there is much less data in the table to query. Secondly, you are not asking Web Intelligence to do the summing up; that already happened behind the scenes in the data load. Thirdly, because the data is already summed up, significantly less data needs to be cached in the microcube, and consequently the Web Intelligence performance will be excellent.

This article is only a simplistic introduction to aggregation and clearly there is much more to consider than this. However, it has served its purpose, which is to point you in the right direction.