Q.How do you handle decimal places while importing a flat file into Informatica?
A.While importing the flat file definition, just specify the scale for the numeric datatype. In the mapping, the flat file source definition supports only the Number datatype (no Decimal or Integer). The Source Qualifier associated with that source will have the Decimal datatype for that Number port: source (Number datatype port) -> Source Qualifier (Decimal datatype). Integer is not supported; hence Decimal takes care of the decimal places.
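Conceptually, the declared scale just fixes how many digits survive after the decimal point when the string from the file becomes a number. A minimal Python sketch of that idea (the scale and sample value are made up for illustration):

    from decimal import Decimal, ROUND_HALF_UP

    SCALE = 2  # assumed scale declared for the numeric field

    def parse_amount(raw: str) -> Decimal:
        # Read the flat-file string as a decimal and round it to the declared scale.
        quantum = Decimal(1).scaleb(-SCALE)   # 0.01 for scale 2
        return Decimal(raw.strip()).quantize(quantum, rounding=ROUND_HALF_UP)

    print(parse_amount("1234.567"))   # -> 1234.57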
Q.What is a parameter file?
A.When you start a workflow, you can optionally enter the directory and name of a parameter file. The Informatica Server runs the workflow using the parameters in the file you specify.
For UNIX shell users, enclose the parameter file name in single quotes:
-paramfile '$PMRootDir/myfile.txt'
For Windows command prompt users, the parameter file name cannot have beginning or trailing spaces. If the name includes spaces, enclose the file name in double quotes:
-paramfile "$PMRootDir\my file.txt"
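For reference, a parameter file is plain text that assigns values to mapping parameters/variables and session parameters under section headings. The layout below is only an illustrative sketch; the folder, workflow, session, and parameter names are invented:

    [MyFolder.WF:wf_daily_load.ST:s_m_load_customers]
    $$LastExtractDate=2004-01-01
    $DBConnectionSource=Oracle_Src
    $InputFile1=/data/in/customers.dat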
Q.What is aggregate cache in the Aggregator transformation?
A.When you run a workflow that uses an Aggregator transformation, the Informatica Server creates index and data caches in memory to process the transformation. If the Informatica Server requires more space, it stores overflow values in cache files.
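As a rough analogy only (not Informatica's actual implementation), the Aggregator keeps per-group state keyed by the group-by ports, which is what the index and data caches hold; a Python sketch:

    from collections import defaultdict

    def aggregate(rows, group_cols, sum_col):
        cache = defaultdict(float)                     # stands in for the data cache (aggregate values)
        for row in rows:
            key = tuple(row[c] for c in group_cols)    # stands in for an index cache entry (group keys)
            cache[key] += row[sum_col]
        return dict(cache)

    rows = [{"dept": "10", "sal": 1000.0}, {"dept": "10", "sal": 2000.0}, {"dept": "20", "sal": 1500.0}]
    print(aggregate(rows, ["dept"], "sal"))   # {('10',): 3000.0, ('20',): 1500.0}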
Q.Why do you use repository connectivity?
A.Each time you edit or schedule a session, the Informatica Server communicates directly with the repository to check whether or not the session and the users are valid. All the metadata of sessions and mappings is stored in the repository.
Q.Briefly explain the versioning concept in PowerCenter 7.1.
A.When you create a version of a folder referenced by shortcuts, all shortcuts continue to reference their original object in the original version. They do not automatically update to the current folder version. For example, if you have a shortcut to a source definition in the Marketing folder, version 1.0.0, and you then create a new folder version, 1.5.0, the shortcut continues to point to the source definition in version 1.0.0.
Maintaining versions of shared folders can result in shortcuts pointing to different versions of the folder. Though shortcuts to different versions do not affect the server, they might prove more difficult to maintain. To avoid this, you can recreate shortcuts pointing to earlier versions, but this solution is not practical for much-used objects. Therefore, when possible, do not version folders referenced by shortcuts.
Q.What is source qualifier transformation?
A.When you add a relational or a flat file source definition to a mapping, you need to connect it to a Source Qualifier transformation. The Source Qualifier represents the rows that the Informatica Server reads when it executes a session. You can use it to:
1.Join data originating from the same source database. You can join two or more tables with primary key-foreign key relationships by linking the sources to one Source Qualifier.
2.Filter records when the Informatica Server reads source data. If you include a filter condition, the Informatica Server adds a WHERE clause to the default query.
3.Specify an outer join rather than the default inner join. If you include a user-defined join, the Informatica Server replaces the join information specified by the metadata in the SQL query.
4.Specify sorted ports. If you specify a number for sorted ports, the Informatica Server adds an ORDER BY clause to the default SQL query.
5.Select only distinct values from the source. If you choose Select Distinct, the Informatica Server adds a SELECT DISTINCT statement to the default SQL query.
6.Create a custom query to issue a special SELECT statement for the Informatica Server to read source data. For example, you might use a custom query to perform aggregate calculations or execute a stored procedure.
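For instance, with a filter condition, two sorted ports, and Select Distinct set, the generated query might look roughly like this (the table and column names are hypothetical):

    SELECT DISTINCT CUSTOMERS.CUSTOMER_ID, CUSTOMERS.NAME, CUSTOMERS.COUNTRY
    FROM CUSTOMERS
    WHERE CUSTOMERS.COUNTRY = 'US'                      -- filter condition becomes the WHERE clause
    ORDER BY CUSTOMERS.CUSTOMER_ID, CUSTOMERS.NAME      -- sorted ports become the ORDER BY clause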
Q.What is a source qualifier?
A.When you add a relational or a flat file source definition to a mapping, you need to connect it to a Source Qualifier transformation. The Source Qualifier represents the rows that the Informatica Server reads when it executes a session.
Q.What is incremental aggregation?
A.When using incremental aggregation, you apply captured changes in the source to aggregate calculations in a session. If the source changes only incrementally and you can capture changes, you can configure the session to process only those changes. This allows the Informatica Server to update your target incrementally, rather than forcing it to process the entire source and recalculate the same calculations each time you run the session.
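The idea, sketched outside Informatica in Python (the state file and row structure are invented for illustration): persist the aggregates computed so far, then fold only the newly captured rows into them instead of rescanning the whole source.

    import json, os

    STATE_FILE = "agg_state.json"   # hypothetical persisted aggregate state

    def incremental_totals(new_rows):
        # Load aggregates computed by earlier runs, if any.
        totals = json.load(open(STATE_FILE)) if os.path.exists(STATE_FILE) else {}
        # Apply only the captured changes; unchanged history is never reread.
        for row in new_rows:
            totals[row["dept"]] = totals.get(row["dept"], 0.0) + row["sal"]
        json.dump(totals, open(STATE_FILE, "w"))
        return totals

    print(incremental_totals([{"dept": "10", "sal": 500.0}]))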
Q.If your workflow is running slow in Informatica, where do you start troubleshooting and what steps do you follow?
A.When the workflow is running slowly, you have to find the bottlenecks in this order: target, source, mapping, session, system.
Q.What is the exact use of the 'Online' and 'Offline' server connect options in the Workflow Monitor?
A.The system may hang on the 'Online' server connect option when Informatica is installed on a personal laptop. When the repository is up and the PMServer is also up, the Workflow Monitor will always be connected online. When the PMServer is down and the repository is still up, we are prompted for an offline connection, with which we can just monitor the workflows.
Q.Explain perform recovery.
A.When the Informatica Server starts a recovery session, it reads the OPB_SRVR_RECOVERY table and notes the row ID of the last row committed to the target database. The Informatica Server then reads all sources again and starts processing from the next row ID. For example, if the Informatica Server commits 10,000 rows before the session fails, when you run recovery, the Informatica Server bypasses the rows up to 10,000 and starts loading with row 10,001.
By default, Perform Recovery is disabled in the Informatica Server setup. You must enable Recovery in the Informatica Server setup before you run a session so the Informatica Server can create and/or write entries in the OPB_SRVR_RECOVERY table.
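A toy illustration of the same recovery idea in Python (the row numbering and load step are imaginary): remember the last committed row ID, then on restart skip everything up to and including it.

    def recover_load(rows, last_committed_id):
        # Skip rows already committed by the failed run; resume with the next row ID.
        for row_id, row in enumerate(rows, start=1):
            if row_id <= last_committed_id:
                continue               # already in the target, bypass it
            load_to_target(row)        # hypothetical load step

    def load_to_target(row):
        print("loading", row)

    recover_load(["a", "b", "c"], last_committed_id=1)   # resumes with row 2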
Q.How does the Informatica Server sort string values in the Rank transformation?
A.When the Informatica Server runs in the ASCII data movement mode, it sorts session data using a binary sort order. If you configure the session to use a binary sort order, the Informatica Server calculates the binary value of each string and returns the specified number of rows with the highest binary values for the string.
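A binary sort order simply compares the raw byte values of the strings rather than a locale's collation rules. A small Python sketch of ranking the top N strings by binary value:

    def top_n_binary(values, n):
        # Rank by the raw encoded bytes, i.e. a binary sort order.
        return sorted(values, key=lambda s: s.encode("utf-8"), reverse=True)[:n]

    print(top_n_binary(["apple", "Apple", "banana"], 2))   # ['banana', 'apple']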
Q.In which circumstances does the Informatica Server create reject files?
A.When it encounters:
1.DD_REJECT in an Update Strategy transformation.
2.A row that violates a database constraint.
3.A field in the row that was truncated or overflowed.
Q.How does the Informatica Server sort string values in the Rank transformation when running in Unicode mode?
A.When the Informatica Server runs in the Unicode data movement mode, it uses the sort order configured in the session properties.
Q.What are the joiner caches?
A.When a Joiner transformation occurs in a session, the Informatica Server reads all the records from the master source and builds index and data caches based on the master rows. After building the caches, the Joiner transformation reads records from the detail source and performs the joins.
Q.When do we use a dynamic cache and when do we use a static cache in connected and unconnected Lookup transformations?
A.We use a dynamic cache only for a connected Lookup. We use a dynamic cache to check whether the record already exists in the target table or not, and depending on that we insert, update, or delete the records using an Update Strategy. A static cache is the default cache in both connected and unconnected Lookups. If you select a static cache on the lookup table, Informatica won't update the cache, and the rows in the cache remain constant. We use this to check the results and also to update slowly changing records.
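Roughly, the master side is read and keyed into the cache first, and detail rows are then streamed against that cache; a simplified Python sketch (the column names are invented):

    def joiner(master_rows, detail_rows, key):
        # Build the index/data cache from the master source.
        cache = {}
        for m in master_rows:
            cache.setdefault(m[key], []).append(m)
        # Stream the detail source and join each row against the cache.
        for d in detail_rows:
            for m in cache.get(d[key], []):
                yield {**m, **d}

    master = [{"dept": "10", "dname": "SALES"}]
    detail = [{"dept": "10", "ename": "SMITH"}]
    print(list(joiner(master, detail, "dept")))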
Q.What are variable ports and list two situations when they can be used?
A.We have mainly three kinds of ports: input ports, output ports, and variable ports. An input port means data is flowing into the transformation. An output port is used when data is mapped to the next transformation. A variable port is used when mathematical calculations are required. Two common situations: holding an intermediate calculation that several output ports reuse, and retaining a value from the previous row (variable ports keep their value between rows), for example to compare the current row with the prior one.
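As a loose analogy in Python (not Informatica syntax), a variable port behaves like a helper variable that is evaluated once per row, feeds several outputs, and carries its value into the next row:

    def expression_transform(rows):
        v_prev_sal = None                          # like a variable port: persists across rows
        for row in rows:
            v_total = row["sal"] + row["comm"]     # intermediate value reused by several outputs
            yield {
                "total_pay": v_total,
                "tax": v_total * 0.1,
                "sal_changed": row["sal"] != v_prev_sal,
            }
            v_prev_sal = row["sal"]

    print(list(expression_transform([{"sal": 1000, "comm": 100}, {"sal": 1000, "comm": 50}])))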
Q.How do you load a time dimension?
A.We can use SCD Type 1/2/3 to load any dimension, based on the requirement.
Q.Where do we use the MQ Series Source Qualifier and the Application Multi-Group Source Qualifier? Give an example for better understanding.
A.We can use an MQSeries Source Qualifier when we have an MQ messaging system as the source (a queue). When there is a need to extract data from a queue, which will basically contain messages in XML format, we use a JMS or an MQ Source Qualifier depending on the messaging system. If you have a TIBCO EMS queue, use a JMS source, a JMS Source Qualifier, and an XML Parser; if you have an MQ Series queue, use an MQ Source Qualifier, which will be associated with a flat file or a COBOL file.
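Because a time dimension's rows are fully predictable, a common alternative (not stated in the answer above, and sketched here only as an illustration) is simply to generate one row per calendar date:

    from datetime import date, timedelta

    def build_time_dimension(start, end):
        # Generate one row per calendar day between start and end, inclusive.
        d = start
        while d <= end:
            yield {
                "date_key": d.strftime("%Y%m%d"),
                "day": d.day,
                "month": d.month,
                "quarter": (d.month - 1) // 3 + 1,
                "year": d.year,
            }
            d += timedelta(days=1)

    rows = list(build_time_dimension(date(2004, 1, 1), date(2004, 1, 3)))
    print(len(rows), rows[0])   # 3 rows; first is 20040101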
Q.In a sequential batch, how can we stop a single session?
A.We can stop it using the pmcmd command, or in the Workflow Monitor right-click on that particular session and select Stop. This will stop the current session and the sessions next to it.
Q.Can you start a session inside a batch individually?
A.We can start our required session individually only in the case of a sequential batch; in the case of a concurrent batch we cannot do this.
Q.What is a view? How is it related to data independence? What are the different types of views, and what is a materialized view?
A.A view is a combination of one or more tables. A view does not store the data; it just stores the query. When we execute the query, it fetches the data from the underlying tables and presents it to us as the view. Types of views include simple views and materialized views; a materialized view physically stores the query result.
Q.What are various types of Aggregation?
A.Various types of aggregation are SUM, AVG, COUNT, MAX, MIN, FIRST, LAST, MEDIAN, PERCENTILE, STDDEV, and VARIANCE.
Q.What is a mystery dimension?
A.Using a Mystery Dimension, we maintain the miscellaneous ("mystery") data in our project.
Q.What is the Lookup transformation?
A.Using it we can access data from a relational table which is not a source in the mapping.
For example: suppose the source contains only Empno, but we also want Empname in the mapping. Instead of adding another table that contains Empname as a source, we can look up that table and get the Empname into the target.
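In spirit it is a keyed lookup against a reference table; a Python sketch with made-up reference data:

    # Hypothetical reference data standing in for the lookup table holding employee names.
    emp_names = {7369: "SMITH", 7499: "ALLEN"}

    def lkp_empname(empno):
        # Return the matching name, or None when the lookup finds no row.
        return emp_names.get(empno)

    source_rows = [{"Empno": 7369}, {"Empno": 7499}]
    target_rows = [{**r, "Empname": lkp_empname(r["Empno"])} for r in source_rows]
    print(target_rows)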
Q.How do you create a mapping using multiple lookup transformation?
A.Use an unconnected Lookup if the same lookup repeats multiple times.
Q.How can we eliminate duplicate rows from flat file?
A.Use a Sorter transformation. When you configure the Sorter transformation to treat output rows as distinct, it configures all ports as part of the sort key. It therefore discards duplicate rows compared during the sort operation.
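The same effect outside the tool, in Python: sort with every column as the key and keep one row per identical key (the column layout is hypothetical):

    from itertools import groupby

    def distinct_rows(rows):
        # Treat every column as part of the sort key, then keep one row per identical key.
        row_key = lambda r: tuple(sorted(r.items()))
        keyed = sorted(rows, key=row_key)
        return [next(group) for _, group in groupby(keyed, key=row_key)]

    rows = [{"id": 1, "name": "A"}, {"id": 1, "name": "A"}, {"id": 2, "name": "B"}]
    print(distinct_rows(rows))   # [{'id': 1, 'name': 'A'}, {'id': 2, 'name': 'B'}]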