Monday, March 31, 2008

THIRD STEP FOR IDEAL IMPLEMENTATION

Hi All,
Here I try to analyze the different data loading methods available, along with some other debatable issues.
Best Practices:
1. Always have rightly balanced teams.
A typical ratio for a medium-complexity process would be 1:1:2 (Functional : Technical Designer : Developers).
The technical designer should be someone who understands both the importance of the business and the technical limitations.
And the rule should be: the technical designer is always involved in the functional discussions with the customer.
This helps avoid committing to things that are not technically feasible. (Nothing is impossible, but is the effort needed to make it happen worth it?)

2. The other big question is: how should I divide my project team?
Method A> Functional and Technical (within technical, people are grouped by the technologies they are good at, like reports, Forms & OAF...)
Method B> One process, one team. (The team size should be at least 20 to implement this.)

There are advantages and disadvantages both ways. I have been part of implementations done both ways. Let me list some of them...
Method A:
Advantages: Less time is taken for development of components (as technical people are doing what they are good at).
Disadvantages:
1. I don't know whether I am correct in quoting this, but the truth is you will find a clear divide between the two teams. And once things start going wrong, people start blaming each other. I think everyone might have experienced this already.
2. For technical people it is an even bigger loss, because you work on different components belonging to different processes and never understand how your component fits into the overall solution. At the end of the day it is just a piece of code.
3. Any change in the process during the course of development is very difficult to handle, as there will be interdependencies among components.

Method B:
Advantages:
1. Everyone will feel ownership of the process, and there is better team spirit.
2. The technical team will also have a better understanding of the processes and will be able to implement changes faster (in a development project, change is unavoidable), as they have all the components with them.

Disadvantages:
1. Development time might be a bit longer, as the technical people in your team might not have expertise in all the technologies involved in the process.

My views might be a bit biased, as I am a strong supporter of Method B.

Technical:
Coming to today's technical discussion: data loading. This will be one of the first steps (development) and the last step (before go-live) of an implementation.
The typical way of data loading is a three-step process:
1. Load data from the legacy system/flat files into temporary (staging) tables.
2. Perform all the validations on the staging table data and load it into the open interface tables or APIs.
3. Run the open interface concurrent programs/APIs to load data into the standard tables.
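As a rough sketch, steps 2 and 3 above might look like this for a hypothetical GL journal conversion. The staging table XX_GL_STG and its columns are assumptions for illustration; GL_INTERFACE is the standard Journal Import open interface table:

```sql
-- Step 2: flag invalid staging rows (staging table/columns are illustrative)
UPDATE xx_gl_stg s
   SET s.status    = 'ERROR',
       s.error_msg = 'Invalid currency'
 WHERE NOT EXISTS (SELECT 1
                     FROM fnd_currencies c
                    WHERE c.currency_code = s.currency_code);

-- Move the clean rows into the open interface table
INSERT INTO gl_interface (status, set_of_books_id, accounting_date,
                          currency_code, date_created, created_by, actual_flag,
                          user_je_category_name, user_je_source_name,
                          entered_dr, entered_cr)
SELECT 'NEW', s.set_of_books_id, s.accounting_date,
       s.currency_code, SYSDATE, s.created_by, 'A',
       s.je_category, s.je_source, s.entered_dr, s.entered_cr
  FROM xx_gl_stg s
 WHERE NVL(s.status, 'NEW') <> 'ERROR';

-- Step 3: submit the Journal Import concurrent program from SRS
-- (or via FND_REQUEST.SUBMIT_REQUEST) to move the data into GL_JE_HEADERS/LINES
```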

First, let me list the ways of data movement in Oracle Apps:
1. Open interfaces -- used for live interfaces (everyday activity) and one-time loading.
2. APIs -- almost the same as open interfaces, but easier to handle (validation and integrity logic is taken care of by them).
3. Outbound interfaces -- required if we integrate Oracle Apps with third-party or legacy systems.
4. EDI -- an automation process; we will talk about it later.

For loading data:
SQL*Loader:
1. Used when data is available in flat files (tab delimited, comma delimited).
2. A faster and easier way of loading data.
3. Can use SQL functions to modify data.
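A minimal control file sketch for point 3 above (the file, table, and column names are made up), showing a comma-delimited load with a SQL function applied during the load:

```
LOAD DATA
INFILE 'customers.csv'
APPEND
INTO TABLE xx_cust_stg
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(cust_number,
 cust_name     CHAR "UPPER(:cust_name)",   -- SQL function applied while loading
 created_date  DATE "DD-MON-YYYY")
```

It would be run with something like: sqlldr userid=apps control=customers.ctl log=customers.log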

DB links: if the legacy system is on an Oracle database, the best thing is to get access to the data the customer wants to import through DB links.
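A sketch of the DB link approach (link name, TNS alias, credentials, and table names are all illustrative):

```sql
-- Create a link to the legacy Oracle database
CREATE DATABASE LINK legacy_link
  CONNECT TO legacy_ro IDENTIFIED BY legacy_pwd
  USING 'LEGACYTNS';

-- Pull legacy data straight into a staging table for validation
INSERT INTO xx_cust_stg (cust_number, cust_name)
SELECT customer_no, customer_name
  FROM customers@legacy_link;
```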

Data loader tools: these are third-party tools used for loading, which automate the process of punching data. They are very user friendly, and not much technical expertise is required to work with them.
But the biggest disadvantage of these tools is that they are slow. If you have huge data volumes it is not advisable to use them (unless you are patient enough to see it through :-) ).
The data loaded will be valid, because it is as good as manual punching.

XML: Oracle provides APIs to import XML data and to export data into XML. This should be the most convenient way for data interaction, as most other technology systems are able to parse XML data easily.
There are some limitations (which can be easily overcome) in Oracle as well; for example, while importing XML data into Oracle tables, Oracle can't parse huge files.
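One way to shred a small XML document into rows is the XMLTABLE function (available from Oracle 10gR2; the tag and column names here are made up for illustration):

```sql
-- Parse an inline XML document into relational rows
SELECT x.cust_number, x.cust_name
  FROM XMLTABLE('/customers/customer'
         PASSING XMLTYPE('<customers>
                            <customer><number>101</number><name>ACME</name></customer>
                          </customers>')
         COLUMNS cust_number NUMBER       PATH 'number',
                 cust_name   VARCHAR2(80) PATH 'name') x;
```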

WebADI: these are Oracle-provided templates for loading data into some standard interfaces, and they are easy to use. We can also create custom APIs and use them for data loading. I feel this is one of the best ways of loading data.

UTL_FILE: this is the PL/SQL way of loading data into Oracle tables. This package gives us APIs to read data from and write data to flat files. This method is very useful when the data to be loaded is small and more validations are required before loading. One limitation of this package is that it reads data in lines, and the maximum length it can read is 1022 characters. When writing data to files, it can write only 32K in one shot; after that we need to close the file and reopen it again.
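A minimal UTL_FILE read loop might look like this (the directory object XX_DATA_DIR and the file name are assumptions):

```sql
-- Read a flat file line by line with UTL_FILE
DECLARE
  l_file  UTL_FILE.FILE_TYPE;
  l_line  VARCHAR2(1022);   -- matches the line-length limit discussed above
BEGIN
  l_file := UTL_FILE.FOPEN('XX_DATA_DIR', 'customers.dat', 'R');
  LOOP
    BEGIN
      UTL_FILE.GET_LINE(l_file, l_line);
    EXCEPTION
      WHEN NO_DATA_FOUND THEN EXIT;   -- end of file reached
    END;
    -- validate l_line and insert into the staging table here
    NULL;
  END LOOP;
  UTL_FILE.FCLOSE(l_file);
END;
/
```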

External tables: this concept comes from Oracle 9i onwards, and it is one of the easiest ways of loading data. Once you create the external table, you can simply use a SELECT statement to query it.
On a performance basis this is as good as direct path loading with SQL*Loader. (Technical people, give this a try...)
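A sketch of an external table over a comma-delimited file (the directory object, file, and columns are illustrative):

```sql
-- External table: the file stays on disk, Oracle reads it at query time
CREATE TABLE xx_cust_ext (
  cust_number  NUMBER,
  cust_name    VARCHAR2(80)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY xx_data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('customers.csv')
)
REJECT LIMIT UNLIMITED;

-- Now simply query it like any other table
SELECT * FROM xx_cust_ext;
```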

Caution: disable any indexes on the table before loading data; otherwise they will slow down the loading process.
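One way to do this (the index name is illustrative) is to mark the index unusable before the load and rebuild it afterwards:

```sql
-- Skip index maintenance during the bulk load
ALTER INDEX xx_cust_stg_n1 UNUSABLE;
ALTER SESSION SET skip_unusable_indexes = TRUE;

-- ... perform the bulk load here ...

-- Rebuild once, after all the data is in
ALTER INDEX xx_cust_stg_n1 REBUILD;
```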

In summary: for all conversions (one-time data movement), use external tables or SQL*Loader.
For interfaces, use PL/SQL, DB links, or XML.
Hi all, I have not used SQL*Loader much; most of the time I have used external tables, UTL_FILE, and XML. So people who have more exposure to it can come up with any limitations or advantages.
Please let me know if anything is wrong, or any other suggestions to make this better...

BARCODE in XML Reports

For generating reports with barcodes

1) You should have the barcode font in your system, i.e., font name FREE3OF9 (find it as an attachment).

2) Copy and paste the attached “Free 3of9” font into your machine directories “C:\WINDOWS\Fonts” and $JAVA_HOME\lib\fonts

Example Java path “C:\Program Files\Java\j2re1.4.2_10\lib\fonts”

3) There is one configuration file, i.e., the file named “xdo” (find it as an attachment).

4) Copy and paste the configuration file in your “XML Publisher Desktop\Template Builder for Word\config”

Example “C:\Program Files\Oracle\XML Publisher Desktop\Template Builder for Word\config”
<?xml version="1.0" encoding="UTF-8"?>
<config version="1.0.0" xmlns="http://xmlns.oracle.com/oxp/config/">
  <fonts>
    <font family="Free 3 of 9" style="normal" weight="normal">
      <truetype path="C:\Program Files\Java\j2re1.4.2_12\lib\fonts\FREE3OF9.TTF"/>
    </font>
  </fonts>
</config>

(The tags are shown here in full; check the attached configuration file once as well.)

5) Once you have done the above setups, you can see the “Free 3 of 9” font type under your MS Word fonts.

6) Whichever column you have to display as a barcode, just apply this font to that column.


One thing to take care of is to convert the data to upper case before generating the XML; do it at the SQL level only.
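For example, uppercasing in the report's data query itself (the table and column names are made up):

```sql
-- Uppercase the value in the SQL so the 3-of-9 font encodes it correctly
SELECT UPPER(order_number) AS barcode_value
  FROM xx_order_headers;
```

In the RTF template, the field bound to BARCODE_VALUE is then formatted with the “Free 3 of 9” font.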

Friday, March 28, 2008

White Paper on Oracle Apps Migration Project

Things to take care of in a migration project...
These are all my personal observations; if anyone has anything new to add, please send a mail and I will incorporate it as well.

Nowadays we come across many migration projects. Comparatively, these are supposed to be easy and straightforward.
But if we take care of a few more things, it would be even smoother...

What is a migration project?
It is moving from a lower version of a product to a higher version (the other way around is also called a migration).

What will the customer expect?
He expects higher performance from the system,
added new functionality,
and better support from Oracle.
The system is supposed to work the same way as it does today. The look and feel might be a bit different, but the functionality should remain intact.

What are the major challenges?
1. The amount of customization in the legacy system.
2. The type of customization -- whether custom-built modules or customized standard processes.
3. Integration with other systems.
4. Support for new environments.
5. The amount of change in the product from the old to the new version.
6. Whether standards were followed while customizing standard objects (standard reports, standard forms, workflows...).

What are the different phases in it?
1. Migration phase: first we take a clone of the instance and migrate the applications to the new version with the old data.
Oracle provides the scripts to migrate the data, and the software will install the new application objects.
2. Optimized migration: we redo the migration phase again in a short span of time to calculate the exact cutover time.
3. MTP: movement to production.

What all do we need to take care of?

Environment change: sometimes the old system might be in one environment and the new system in a different one,
e.g., the old one on AIX and the new one on Red Hat Linux.
One problem to expect is that some commands in AIX might not work in Linux environments.
So if we have shell scripts or host script files, those need to be checked and changed for the new environment.

Database layer change: as the product is migrated, there might be database changes, like new NOT NULL columns getting added.
So in case you have any direct inserts happening into the Oracle tables, or even interfaces, they might need to be corrected,
and values need to be populated into the new NOT NULL columns.

Standard report modifications: because the upgrade will bring in new application objects, any modification done to standard report objects will be lost.
It is better to rename those objects and re-register them, to keep the objects intact for future migrations.

Custom reports migration: for reports, we just need to open the report in the new version of Report Builder (in case a Report Builder version difference exists),
use Shift+Ctrl+K to compile the objects, and save. This should make the reports work.
But from my personal experience, we need to run all the reports, and data validation should be done for all of them.
This might be a tedious task if there are a huge number of reports, but it has to be done.
The project plan should include the testing of each and every report (just data-level validation).
One more important thing I heard: compiling in Report Builder will not validate the query,
so any column changes will not get caught at migration time; they can only be found out at run time.
Standard form object migration: the migration will take care of the upgrade of standard objects. Hopefully there are no customizations at the code level for the standard objects. In case there are any, try to redo the customizations using the new features like forms personalization and CUSTOM.pll. One more important thing: before redoing a customization, check whether it is really required in the new system. Even the customer's process might have changed, so check with the customer before redoing them.

Custom form objects migration: this is not as simple as the reports. There is a FLINT60 utility (up to 11.5.10.2), or a corresponding utility, available to upgrade the forms from the previous version to the new version. The major roadblock is if the forms were not developed as per Oracle Applications standards: property palettes not defined, separate buttons to pop up LOVs, and so on. In that case the form has to be migrated using the FLINT60 utility, and changes need to be made manually to get the same look and feel as the new version.
For detailed steps on using FLINT60 and custom form migration, check my blog: http://oracleappstechnicalworld.blogspot.com/

Legacy system integration: this will be one of the big tasks. The first step is to figure out how the legacy systems are integrated:
1. Through the file system
2. Through DB links
3. Through third-party software
1. For file system integration, check the directory permissions and the UTL_FILE_DIR settings on the legacy and new systems.
2. For DB links, check whether DB links can be created between the new database version and the database version of the legacy system. It is better to confirm this at the assessment stage itself; if not possible, time needs to be allocated for implementing an alternative solution.
3. Check the compatibilities thoroughly in case something like this exists.

Pro*C programs: the Pro*C files need to be recompiled on the new instance, and the Pro*C environment needs to be set up on the new application tier. If there are any custom Pro*C programs, the Pro*C environment setup should be a task in the migration process.


General observations: one important thing to remember is that the migration will overwrite all the standard objects and standard application data, e.g., FND messages. Suppose in the old instance you changed a standard message text; that change will be lost in the migration process. Those changes have to be redone.

Tuesday, March 18, 2008

Useful Information about LOG & OUT Files

Recently we came across a scenario where the naming convention of the out files needed to be changed. After some R&D we found a good document regarding this.

Where do concurrent request or manager logfiles and output files go?
The concurrent manager first looks for the environment variable
$APPLCSF. If this is set, it creates a path using two other
environment variables: $APPLLOG and $APPLOUT
It places log files in $APPLCSF/$APPLLOG, output files go in
$APPLCSF/$APPLOUT

So for example, if you have this environment set:
$APPLCSF = /u01/appl/common
$APPLLOG = log
$APPLOUT = out

The concurrent manager will place log files in /u01/appl/common/log,
and output files in /u01/appl/common/out
Note that $APPLCSF must be a full, absolute path, and the other two
are directory names.

If $APPLCSF is not set, it places the files under the product top of
the application associated with the request. For example, a PO report
would go under $PO_TOP/$APPLLOG and $PO_TOP/$APPLOUT
Logfiles go to: /u01/appl/po/9.0/log
Output files to: /u01/appl/po/9.0/out
All these directories must exist and have the correct permissions.

Note that all concurrent requests produce a log file, but not necessarily
an output file.
Concurrent manager logfiles follow the same convention, and will be
found in the $APPLLOG directory



What are the logfile and output file naming conventions?
Request logfiles: l<request id>.req

Output files: If $APPCPNAM is not set: <username>.<request id>
If $APPCPNAM = REQID: o<request id>.out
If $APPCPNAM = USER: <username>.out

Where: <request id> = the request id of the concurrent request
And: <username> = the id of the user that submitted the request

Manager logfiles:

ICM logfile: default is std.mgr, can be changed with the mgrname
startup parameter
Concurrent manager log: w<XXX>.mgr
Transaction manager log: t<XXX>.mgr
Conflict Resolution manager log: c<XXX>.mgr

Where: <XXX> is the concurrent process id of the manager
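The path and naming rules above can be sketched in shell (the values are illustrative, matching the example earlier in the post):

```shell
APPLCSF=/u01/appl/common
APPLLOG=log
APPLOUT=out
REQUEST_ID=123456

# Request log file: l<request id>.req under $APPLCSF/$APPLLOG
LOGFILE="$APPLCSF/$APPLLOG/l${REQUEST_ID}.req"

# Output file when $APPCPNAM = REQID: o<request id>.out under $APPLCSF/$APPLOUT
OUTFILE="$APPLCSF/$APPLOUT/o${REQUEST_ID}.out"

echo "$LOGFILE"
echo "$OUTFILE"
```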