Wednesday, September 12, 2018
Upgrade 101
Please read the article posted on Medium: Upgrade 101
Monday, August 6, 2018
Customization in JDA (RedPrairie) – A fearful proposition for many?
Generally speaking, in my project experience of more than twenty years in the JDA (RedPrairie) domain, one commonality exists: the fear of customizing the software to achieve the operational efficiency one desires. Often these customizations are pushed to JDA in the hope of getting them incorporated into the standard product, yet most of the time it is a struggle to get past this hurdle, and the gap is accepted as a necessary evil. The fear generally stems from previously failed attempts at such customizations, or from the misconception that any customization takes the client off the upgrade path.
In this blog entry, my goal is to address this concern. As long as customizations are done in a bolt-on manner that preserves the base JDA code, the client's upgrade path is not put in jeopardy. The customizations can include changes to RDT screens, new RDT screens, or changes to the standard allocation, picking, or putaway behavior, and the client can still remain on the upgrade path.
Again, the purpose of this entry is not to advocate rampant customization. The only objective is that operational efficiency not be sacrificed simply out of an unfounded fear of customization.
The analogy I can think of is buying a nice car. It suits me well and has all the gadgets I need, except for one limitation: it comes with high-performance tires. I happen to live in Wisconsin, so I am left with two options:
- Not drive it for the four winter months
- Or get a different car for winter
Many customers treat their JDA purchase in this manner and are willing to adopt a manual approach, or a different system, to handle a shortcoming rather than change the tires. To add insult to injury, they somehow believe that buying all-weather tires from the dealership at twice the price is the only way to keep the car's warranty intact.
Understanding the JDA Application Architecture
JDA changes are configuration changes, database changes, or code changes. At each of these levels, following the correct methodology allows the client to preserve their changes during an upgrade migration.
- Configuration Changes - Almost all configurable changes (field names, messages, validations, lookups, etc.) carry a customization level. Setting this customization level to a non-zero value (generally 10 or above) ensures that a JDA hot-fix will not replace the change.
- Database Changes - Many times a change requires adding a custom column to a standard table, or adding a custom table. As long as the column names are prefixed with UC and the table names with USR (the convention provided by JDA), any new version will preserve these during an upgrade. Furthermore, if the user follows the convention of keeping the changes in a script under the $LES/db/ddl folders, those scripts can be run again after the new data dictionary is loaded (a hedged example appears after this list). For example, if a client added custom columns to PCKWRK in the 2009 version and is now moving to a higher version where PCKWRK is split into two tables, the upgrade scripts cannot determine which table the columns should move to. But the user can easily edit their scripts in the $LES/db/ddl folder to reference PCKWRK_HDR or PCKWRK_DTL and run them again (and make the corresponding changes to the code).
- Command Based Changes - Changes made at the command level, when done correctly, are the easiest to preserve, because the $LES/src/cmdsrc/usrint directory is kept intact during an upgrade. The key is how the user (or partner) made the changes:
- A change can be a completely custom solution, like a new DDA, a new report, etc. This type of change has no dependency on an upgrade other than confirming that it is still needed and that the data model is consistent
- A change that is an override to a standard command, like adding custom business logic to allocation, etc. This is where the correct methodology comes into play. The client should be mindful of the following red flags:
- A change to a standard command being made at the C/Java code level - If the partner is providing new C/Java code and pointing your command to a new var-based component, beware: you are no longer on the upgrade path. The client should challenge this type of change and ask whether there is another option
- A change to a standard RDT screen being made by copying the standard code to VAR and modifying it. Again, as a client, you are not on the upgrade path. Challenge this and find out why the given change cannot be done as an override
- Re-purposing standard database columns for some other use. A client often chooses, or is advised, to use a standard column (like REVLVL) for a change the client is implementing rather than adding a custom column for the purpose. Beware that the next upgrade may no longer have that column available, or may use it differently, making your change incompatible
- Triggers on standard commands - Changes may be made by adding business rules on standard commands through custom triggers. As long as the custom triggers have a higher firing sequence and a custom name, the upgrade path is preserved. Many times a change is needed to a standard trigger; this should always be done by disabling the standard trigger and adding a custom trigger in its place. Clients, beware: if the change is not being made in this manner, you are moving away from the standard methodology.
- Integration Changes - These changes are more common than others, as clients usually have little choice in the matter. However, I see the most non-standard practices here, causing clients undue effort during an upgrade. So, client, beware: if you end up with a custom event where a standard one is available (like a custom INV-ADJ or a custom SHIP_LOAD), ask why. Generally speaking, customizations are only needed in the Retrieve methods, or to add some algorithms, etc.
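To make the database-change convention concrete, here is a hedged sketch of a script that could live under $LES/db/ddl (Oracle syntax; the column, table, and constraint names are made up for this illustration, and your data types and sizes will differ):

-- Illustrative only: uc_carrier_ref and usr_pallet_audit are hypothetical names.
-- Custom columns carry the UC prefix and custom tables the USR prefix, per the JDA convention,
-- so the script can simply be re-run after a new data dictionary is loaded.
alter table pckwrk_dtl add uc_carrier_ref varchar2(30);

create table usr_pallet_audit
(
    wh_id            varchar2(32)  not null,
    lodnum           varchar2(30)  not null,
    uc_audit_reason  varchar2(60),
    adddte           date          default sysdate not null,
    constraint usr_pallet_audit_pk primary key (wh_id, lodnum, adddte)
);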
Need of Customization
As stated above, one can have varying perspectives as to what constitutes a customization in JDA. The author's perspective is that any change that takes the application away from a possible future upgrade needs to be evaluated heavily on the merit of the given change. Beyond this, as long as the base JDA code remains in play, extending the code, database, or configuration should be weighed only against the operational cost of the work-around that is making the change necessary. For example:
- JDA does not offer any column in the "Pre-Plan Summary" for Wave Operations to select orders by carrier move. If the customer is doing transportation management, this can be a valuable field to display on the pre-plan summary screen (a sketch of this override appears after this list)
- The change is simply to override the list command
- The override should call the original command, then run an additional select to get the carrier move and re-publish the result set
- The screen will start to display the additional column because it is in the result set
- The code is upgrade-proof because the standard command is still what returns the main data set
- JDA offers the capability to write triggers on existing commands. The business requires an additional log entry for inventory movement under specified conditions
- The JDA standard Inventory Deposit RDT screen needs to show an additional set of information. In post-2011 versions of JDA the RDT screens are Java based, hence they can be extended without rewriting the entire screen. The base RDT screen remains pristine, and the additional information or method is called as custom
- A given SQL statement is not optimized. A Query Hook can be added to rewrite the query before it is sent to the database engine, and the change remains upgrade-proof (please see the blog entry on this topic)
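A hedged sketch of the first example above, placed in $LES/src/cmdsrc/usrint, could look like the following. The command name "list preplan summary data" is a placeholder (the real command name depends on your version), and the join used to pick up the carrier move assumes each result row carries a ship_id:

/* Call the standard command first (the ^ prefix reaches the overridden level),
   then enrich each row with its carrier move and re-publish the combined row. */
^list preplan summary data
|
[select stop.car_move_id
   from shipment
   join stop on stop.stop_id = shipment.stop_id
  where shipment.ship_id = @ship_id] catch(-1403)
|
publish data combination
 where car_move_id = @car_move_id

Because the standard command still produces the base result set, the screen simply gains the extra column and the override survives upgrades.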
The above discussion focused only on business reasons to extend standard functionality. More often, we need entirely custom solutions, like a new DDA, a new RDT screen, etc. Generally speaking, any modification needed in JDA is built to optimize a warehouse operation. Thus the existing set of standard commands may still remain in play; the business case is simply to string a set of standard commands together. For example:
- JDA does not offer a standard transaction to change the item on existing inventory. The standard option is to adjust out the inventory of the first item and then adjust in the inventory of the second item. This is the preferred mechanism, as the process fires the requisite host transactions to keep the host in balance. The customization in this case, be it a DDA or an RDT screen, would simply take the input from the user (the LPN and the new item) and then run the two transactions in succession. The end result, even though considered a customization because it is a new screen, calls standard functions underneath (a hedged sketch of such a wrapper appears after this list)
- JDA does not offer a standard transaction for automatically loading and shipping a trailer. However, for customers that do inter-company transactions, this often becomes a requirement. The goal is to load and dispatch a trailer so an inbound ASN can be generated to receive the inventory at the other plant; the inventory does not physically need to be loaded. In this case again, a DDA that runs the load inventory, close trailer, and dispatch trailer transactions in succession achieves the end result and is convenient for the users
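For the item-change example, a hedged sketch of the wrapper pattern is shown below. Every command inside the braces is a placeholder name, not an actual JDA command; the point is only that the custom command takes the user input and runs the two standard adjustment transactions in succession within one transaction:

/* Hypothetical wrapper command saved as a custom command (e.g. under usrint).
   Replace the placeholder command names with the adjustment transactions
   your version actually provides. */
{
    /* placeholder: standard adjust-out of the existing item on the LPN */
    adjust inventory out
     where lodnum = @lodnum
       and prtnum = @old_prtnum
    ;
    /* placeholder: standard adjust-in of the new item onto the same LPN */
    adjust inventory in
     where lodnum = @lodnum
       and prtnum = @new_prtnum
       and untqty = @untqty
}

The load-and-ship-trailer example follows the same shape: one braced block that calls the standard load inventory, close trailer, and dispatch trailer commands in order.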
The business cases discussed above should allow the reader to appreciate that the needs for extending business logic in the JDA application are often simple in nature, with a high return in operational efficiency. Furthermore, all of the business cases discussed above, though "custom", remain upgrade-proof because they call standard commands underneath.
Types of Customizations
This leads into the types of customization terminology applicable in the JDA domain. Please consider that the following list is based on the author's perspective and on how custom work is being defined here. A given enterprise may differ on the specifics of what constitutes custom work, but may agree on the scope of the list below:
- Extending Business logic
- Creating a wrapper command that calls one or multiple standard commands in succession
- Creating a custom command for custom DDAs, Reports, and Labels etc.
- Overriding Existing Commands
- Override to change the inputs to a standard command to effect the needed outcome
- Override the command to change the result set of the command
- Override to change entirely the business logic (not recommended), but allow the calling screens etc. to still function the same way
- Adding Custom Triggers
- Allow for additional logging
- Allow for custom
- Adding custom Tables/columns
- To track custom attributes
- To track custom
- Extending standard
- Adding custom
- Adding custom Data Driven Applications (DDAs)
- Adding custom .NET screens for more complex screen requirements
- Extending standard RDT screens
- Adding new methods, or extending existing ones like a new behavior on Form Entry
- Adding new
- Adding new RDT screens
- Adding new Reports and Labels
- Adding new
- Adding new Valid Lists
All of the above-mentioned changes to the standard delivered JDA application may be intimidating to a customer, and fears start to set in around cost and possible future upgrades. Yet, when done correctly, the cost is justified by the ROI on operational efficiency, and the fear of an upgrade is mitigated by employing the correct methodology.
Conclusion
The JDA application architecture, with its flexible screen architecture, Java attributes in the RDT screens, and the overriding levels of the MOCA command structure, allows for extending the business application. The notion that no two warehouses are alike resonates strongly with this architecture. While controls are needed and needless customizations should be avoided, a path should never be taken that compromises operational efficiency simply to avoid a "customization". The mere fact that a given change requires some code to be written, versus going to a screen and turning a flag on or off, should not be the differentiator between a customization and a configuration.
“In a tailor-made service, you fit the cloth according to the size and taste of the customer; not just the taste and strength of the designer.”
Tuesday, July 31, 2018
JDA 2017 UI - The Good, The Bad and The Ugly
JDA's 2017 version and its GUI rendering are leaps and bounds ahead of previous versions, both in presentation and in capabilities. However, this dramatic transformation comes with a steep learning curve, especially for those converting from previous UI experience.
For me, even the conversion to the version 8.2 UI was a challenge (I could not find the Exit button anymore). However, 2017's Web-based UI is a complete transformation in experience and usability.
In this blog entry, I will share my experience (not to say it is the final word) and hope that it can be a conversation point, inviting comments and feedback to further enrich this entry. Also note that the goal of this entry is not to challenge the vision behind 2017. I strongly believe this change is for the better, and user adoption will attest to it.
The Good:
I must say that JDA has invested quite a bit in the 2017 product. To move away from the old thought process and rethink it into this new version, one must be humbled by the sheer magnitude of the endeavor. So what did JDA get right in this release? Again, a disclaimer that I have not fully gone through this version, so I will continue to edit this article as I dig deeper.
- WEB Availability - The Portal version is slick, crisp and fast. There is no need for any components to be downloaded, and the user experience is rich with many available widgets, especially the embedded charts on almost every screen
- UI Transformation - I personally like the way this version's UI is put together. Someone moving from version 8.2 will find it less of a change; from earlier versions, it may be a tough learning curve. The user's 360-degree view approach, with dashboards in front of the WMS functions, is revolutionary
- Integrated Dashboards - This version has been envisioned with the end user in mind. Thus the UI is rich with dashboards, charts and hyperlinks to allow the user to continue to drill down as needed
- Data Approach - The UI is envisioned with Data first. The user is presented with the data based on the function the user initiated. The user then refines the presented data as needed
- Switch between Warehouses - For Multi-Warehouse installations, the ability to quickly switch between warehouses is nice and efficient
- Switch between Workstations - This was a limiting factor in previous versions, where a workstation was tied to a printer and it was quite difficult to switch a workstation to a different role. In this version, you can simply switch to a new workstation and adopt all of that workstation's features
- Complete Knowledge Base in Help - The Help icon provides rich, relevant information that helps bridge the learning gap quickly
- Page Templates Easy to Develop - The Page Templates (the substitute for DDAs) are quick to develop and provide a rich experience
- Re-Envisioning Trace - The new tracing paradigm is a better approach; it allows tracing only the needed functions by keeping the Tracing window on top while interacting with the application in the background
- Double Pipe (||) is still supported - The double pipe for providing a list of limiting values in a search is still supported. Yeah!
- Dashboards with Auto Refresh Option - Most Dashboards and Inquiry screens have the Refresh Option where the user can choose the number of seconds for Auto-Refresh
- Relevant Data in Menu Options - This is quite a nice feature. For applications that will bring up data (like Inventory Issues as an example) the menu will show the user what record count to expect when the screen is launched. Neat!
- RF in General Stays the Same - This is a saving grace: given the learning curve involved with the GUI, most of the user population in a warehouse, especially in the case of an upgrade, will remain largely unaffected
- RF Receiving without Receipt Option - I know this blog is about the UI, but I wanted to say kudos for this new RF screen. Great for bringing in inventory as receiving (rather than through Inventory Adjust)
- Data Model is very Similar - The data model, other than the changes needed around Locations/Areas, is largely unaffected. This allows for conversion scripts that can effectively convert an existing setup into the 2017 model if needed (see Oracular's approach to an accelerated technical 2017 upgrade - coming soon!)
The Bad:
Disclaimer again: my enumeration of the following is based on my experience with the application, again as a functional user. It is in no way meant to take away from the user experience described above in the Good features of this version.
The features I am counting in "The Bad" list are features that make the application setup quite cumbersome. Furthermore, even for an advanced user, some of them will start to be a nag, especially if the user was comfortable using the advanced grid features of previous versions.
- Nested Setup Screens - The setup requires a functional user to go through nested hyperlinks for various setups, which makes the setup part quite laborious. In contrast, if we look back at the "Area Maintenance" screen with its various tabs, it would feel like a luxury.
- No Criteria Fields on Inquiry Screens - The new UI paradigm is to use the search box for limiting your data. A new user will find the absence of a "Find" button quite unsettling. Using the search box seems a tedious way of doing searches, especially when a search involves multiple fields. Furthermore, a user can easily pick the wrong field by mistake and has to start again
- Criteria Fields and the Ability for AND vs OR - A user can specify multiple criteria using the search box. If, within these, the user needs an OR condition (instead of AND), the condition must be modified manually. However, this is a limiting factor, as the logical OR cannot be enclosed in parentheses (the expression is evaluated mainly from left to right)
- The Dashboards with Hyperlinks Do a Query First - Any dashboards that are set up with hyperlinks go to a detailed screen and query the data right away. The user needs to use the search box afterwards to limit the data. This can become a problem in a live environment where users continue to click on, say, open orders and keep bringing up large amounts of data (a small saving grace is that the grids are designed to bring back only 10 records initially)
- Dashboards with Auto Refresh Option - Most Dashboards and Inquiry screens have the Refresh Option where the user can choose the number of seconds for Auto-Refresh, which can be an issue in a live environment if a user sets up a refresh on a badly formulated query (or search criteria)
- Server Trace File Name - I have not found a way to name the server trace files to my specification (if someone has found a way, please comment and I will remove this item from the list)
- Lack of Multiple-Screens - Any screen a user navigates to, replaces the existing one. The user needs to make a conscious decision to invoke a screen in a new tab to achieve the multi screen/tab effect. This can cause issues when trying to compare data between multiple screens
- New Naming Conventions - JDA has opted to change most of the naming conventions in this release. This can be a huge learning curve for upgrading users. The following is a list of commonly used terms where it will be challenging to relate to the new names. Others have changed simply because of the new approach to things (like Pick Zones, Storage Zones, Location Types, etc.) and need to be adapted to because it is a new version:
- LPN for Load/ Pallet
- Sub-LPN for Case
- HU for Asset (SAP users, watch out for confusion)
- Equipment for Trailer
- Inbound Shipments for Receipts
- Loads for Carrier Move
- Lack of Grid Features Compared to Previous Versions - Some of the grid features, like saving your own views, summary/grouping, and grid colors, are not available. Sorting is also a two-step process, and the ability to see the native value of a code is gone. Furthermore, the grid can show a maximum of 150 records at a time; depending on the screen, this may turn out to be a limiting factor
The Ugly:
Mainly, I had to use this heading as a reference to the old Western of the same name. My classification for this list, however, covers the areas where I worry that the application's new approach to some data management can lead to issues in a live environment if left unchecked. So, mainly a word of caution for the reader: this list is comprised of items that I see as having the potential to do harm, not items that are necessarily bad by themselves:
- Location Mass Update (and other Mass Updates) - This is a necessary screen for doing a lot of setup, especially because many setup screens are nested in nature, so you need to come back to Locations to do updates again. However, I am concerned that the way one enters the screen from the menu options (like Storage Locations) can lead to updating more locations than intended. A user must first filter out the unneeded locations before applying the change, rather than starting with the correct set of locations
- Switch between Fat Client and Web Client - Since not all screens are converted to Web, JDA has taken the approach of not providing the Web screens on the Fat Client. This means a user needs to jump between two UIs to do their job. Not a huge concern, but it can be for a specific role like a Pack Station user
- Data not refreshed quickly - I have noticed on some screens (especially nested ones) that the data is not refreshed quickly. In some cases I had to reconnect to see data changes. It could be a quirk in the Web layer, but it can be troublesome if the user cannot quickly identify what the issue is
- Search Box as the Only Search Tool - I listed this one in the Bad list above, but I am concerned mainly based on my experience with previous versions and users' ability to interact with the application. While I can save my criteria and reuse them, the interaction is tedious: you enter a value and then search through a list to pick the field to apply it to. Experienced users will take advantage by continuing to type their search, but a novice user will spend more time on this interaction. This one I am hoping to move from this list to the Bad, or even the Good, features list; it is just too soon to tell (please leave feedback with your own experiences)
Conclusion
It is too soon to draw any conclusions about this product and its UI. As stated in the beginning, when I moved to version 8.2, I missed my Exit button and hated the idea of all the tabs going from left to right. I remember when RedPrairie first introduced Multi-Warehouse into its dictionary and how many bugs were introduced in the product as a result. So I must say that, given the amount of change in the UI, the dictionary, and the commands, this is a very stable product. The UI quibbles I am chalking up to the learning curve and my own prejudices from previous versions.
All in all, I am hoping to get feedback to see how others feel about this version, and I hope to alter my lists based on it.
Tuesday, May 15, 2018
MERGE/ USING - A beneficial SQL Statement
Often in our JDA commands we are faced with doing an INSERT, UPDATE, or DELETE based on data in one table as it relates to another. For example, say I am loading inventory data into a transaction table to be posted for polling. I possibly have the following scenarios:
- If my Inventory record exists in the destination table, then update it
- If the record does not exist, then insert it
- Possibly, if the record exists, but my inventory is in a different status (depends on specific use case), then delete it from the destination
To implement this logic, a MOCA Command would be something like this (pseudo code):
[select data from destination table where pk = @lodnum] catch(-1403)
|
if (@? = 0)
    if (@data has wrong status)
        perform my delete of destination table
    else
        perform my update of destination table
else
    perform my insert of the destination table
This in itself is not difficult to follow from a coding perspective. However, we now have three different code branches to manage and maintain. The MERGE/USING statement (both SQL Server and Oracle support it, with slight differences in usage and syntax) allows all three possibilities to be encompassed in one statement.
The pseudo syntax of MERGE/USING is as follows. The statement was first introduced in SQL Server 2008 and Oracle 9i. For the specific syntax and usage applicable to your needs and database, please consult the relevant database documentation.
MERGE <destination table>
USING <source table> /* Can be a select */
ON ( key columns )
WHEN MATCHED <and extra clauses>
THEN UPDATE....
WHEN NOT MATCHED <BY TARGET or SOURCE>
THEN INSERT....
You only have the options of MATCHED and NOT MATCHED, with extra qualifiers to implement the possibility of a DELETE. Also note that the UPDATE and DELETE actions can be provided at the same time under the MATCHED clause.
The benefit is that with one SQL statement, all three DML operations can be performed. From a performance standpoint, we have one cursor traversing the source table and performing the operations on the destination table in a single pass, rather than opening and closing cursors inside a loop.
Please note that the MERGE statement is optimized for merging a SET of data, not for single-row operations. On a multi-row set, the performance gain of a MERGE/USING statement can be almost 15 times that of doing the same operations as separate INSERT/UPDATE statements, so it is best used when a larger set of data needs to be merged into another table. Also note that the entire MERGE operation is one DML statement and hence one COMMIT context.
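As a hedged, concrete sketch of the inventory example above (Oracle syntax; the staging table uc_inv_stage and its columns are made-up names for this illustration, and on SQL Server the delete would typically be expressed with WHEN NOT MATCHED BY SOURCE rather than the DELETE clause shown here):

MERGE INTO uc_inv_stage dst
USING (select lodnum, prtnum, untqty, invsts
         from inventory_view) src
   ON (dst.lodnum = src.lodnum)
 WHEN MATCHED THEN
      UPDATE SET dst.prtnum = src.prtnum,
                 dst.untqty = src.untqty
      -- Oracle allows an optional DELETE under MATCHED: drop rows whose inventory status disqualifies them
      DELETE WHERE src.invsts <> 'A'
 WHEN NOT MATCHED THEN
      INSERT (lodnum, prtnum, untqty)
      VALUES (src.lodnum, src.prtnum, src.untqty);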
Hope you can find a use case appropriate for this statement.
Tuesday, May 1, 2018
Utilizing MOCA Query Hooks
RedPrairie/JDA introduced Hooks in the MOCA toolset. The hook described here is referred to as the "QueryHook" (see the MOCA Developer Guide for details). This feature allows you to look for a given SQL statement before it is sent to the database engine and to manipulate the SQL itself, or possibly its result set.
Caution: as the MOCA Developer's Guide also mentions, "This hook should be very heavily evaluated before its usage is implemented", as a wrong implementation can certainly have unpredictable results.
Oracular has introduced a solution that simplifies this process and allows a client's development team to change a SQL statement at run time to their advantage. This solution takes the complexity out of writing a new Java method for the QueryHook and instead provides a simple text file where query patterns can be stored.
Why do we need to change SQL this way?
The JDA MOCA platform allows great flexibility, providing mechanisms for overriding standard functionality while keeping the upgrade path open. Many times, however, the performance of a given query requires the query itself to be changed. Introducing an index may not yield the execution plan one is looking for, and the only viable option is to rewrite the query.
If the query happens to be in a local-syntax command, the user can easily copy the command to an LES folder and change the SQL to their liking. In doing so, they have closed the door to this command being fixed by JDA (as it is a complete override), but the solution and performance goals have been achieved.
Many times, however, the SQL that needs to be tuned is inside a C/Java program or part of the RDT code. These SQL statements can only be manipulated using the QueryHook option.
Oracular's Solution
The MOCA QueryHook can be implemented by anyone with knowledge of Java, and they can write their own hook to take advantage of it. Oracular provides a simpler path for our developers as well as our clients: a file sitting in the $LES\data folder contains the query pattern to look for and its replacement. This allows a new SQL pattern to be introduced with ease.
The following sections explain how to use the Oracular solution.
Basic Components
Anyone wanting to take advantage of this feature from Oracular will need the Mocahooks.jar file in their $LES\lib folder and the query_replacements.txt file in their $LES\data folder.
Structure of Query Replacement File
The Query Replacement File has three components for each SQL
that needs to be manipulated:
- Leading edge of the SQL to search for – This must be an exact match. This string does not allow for any regular expressions for white space. So, number of spaces, new line characters etc. are all important to be specified exactly
- Regular Expression String to match the rest of the SQL – This Regular expression allows to match and parse the rest of the SQL. This expression serves two purposes:
o
Fully qualify the SQL
o
Find Groups of data inside
the SQL to be used during the replacement (important concept that will be
explained in the example illustration)
- SQL to replace with – This is the new SQL that is executed instead. The Groups identified in the search can be used in the replacement SQL
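Putting the three components together, each entry in query_replacements.txt follows the shape below, with $$$$$ separating the sections (inferred from the worked examples later in this post; confirm the exact file grammar with Oracular):

<exact leading edge of the SQL, copied verbatim from the XLATE line of the trace>
$$$$$
<Java regular expression that matches the full SQL and captures groups>
$$$$$
<replacement SQL, which may reference the captured groups as $1, $2, ...>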
The use of replacements is generally performance related. However, they can also be used to manipulate a result set if needed. If a result set is being manipulated, be sure to verify that downstream processing can handle the change to the result set.
Regular Expressions
The search string uses regular expressions (regex) to search through the SQL string. Please note that regular expressions come in many variations, with unique structures and implementations depending on the usage; for our purpose, only Java regular expressions can be used (not POSIX, PHP, etc.).
The following are the expressions commonly used for our purpose:
- (?i) - Starting the expression with this, specifies to ignore the case during the search
- (?s) – This modifier allows to ignore New-Line when searching for any character
- If both of the above modifiers are needed then only one ? in the beginning is needed (?is)
- ^ - Asserts position as the start of the string
- \s+ - Look for white space, at least one character of it. Commonly this is used before SQL keywords, as there needs to be at least one space before them (like from or where)
- \s* - Look for white space. None is also OK
- \( or \) – Parenthesis are part of Regex. So if you need to specify a parenthesis in your expression, always precede that with a \
- (.*?) - Look for a group of characters. Any character can match, and whatever matches becomes a group (see below on using groups). Groups are numbered from left to right, so the first instance of (.*?) is group one ($1), the next is group two ($2), and so on
- .* - Look for any character to match
- Anything else mentioned in the expression is taken as is. So, keywords like select, from, or operators like = etc. are to be specified as is
Please note that this is just a brief list of commonly needed search operators. Based on your specific need, you may need to use more.
If you need to validate your expression, you can use an online regular-expression testing site. Provide your regular expression string in the first field and the select statement you are searching in the next field, and the site will verify whether the match will occur. It also lists all the captured groups and explains each part of your expression; Group 1, for example, is whatever the first (.*) captured, color coded for your convenience.
Generally speaking, the groups are used for all the variable data that may come in the SQL string. Since this data cannot be predicted ahead of time, the search string must be able to ignore it during the search, yet still use it later when rebuilding the new string. In this illustration, the group data involves part of the SELECT statement and the variable data in the WHERE clause.
SQL Statement from the Trace File
It is very important to find the proper statement in the trace file for use in the QueryHook. JDA trace files show a SQL statement in three different forms:
- The original form, as sent by the MOCA command
- The XLATE form, where bind variables are introduced
- The modified (UNBIND) statement sent to the SQL engine
For QueryHook purposes, we must use the XLATE version of the SQL. Any other version may not yield a good search pattern and match. The following is an illustration of how the SQL takes different forms in the trace file:
Original SQL:
Executing SQL: select distinct lodnum, decode('lodnum', 'subnum', subnum,
'dtlnum', subnum, 'x')
subnum, decode('lodnum',
'dtlnum', dtlnum, 'x')
dtlnum from inventory_view where (('lodnum' = 'lodnum' and lodnum =
:var_lodnum) or ('subnum' =
'lodnum' and subnum = :var_lodnum)
or dtlnum = :var_lodnum )
XLATE Version:
XLATE: select distinct lodnum, (case when :q0 = :q1
then subnum when :q0 = :q2 then subnum else :q3 end) subnum, (case when :q4 = :q5
then dtlnum else :q6 end) dtlnum from inventory_view where ((:q7 = :q8 and lodnum =
:var_lodnum) or (:q9 = :q10
and subnum = :var_lodnum) or
dtlnum = :var_lodnum )
UNBIND Version Sent to SQL Engine:
UNBIND: select distinct lodnum, (case when 'lodnum' = 'subnum'
then subnum when 'lodnum' = 'dtlnum'
then subnum else 'x' end) subnum, (case when 'lodnum' =
'dtlnum' then dtlnum
else 'x' end) dtlnum from inventory_view where (('lodnum' = 'lodnum' and lodnum =
'00100538930017771875') or
('subnum' = 'lodnum' and subnum = '00100538930017771875') or dtlnum =
'00100538930017771875' )
Notice that the original version of the statement has a DECODE in it. The XLATE version converted that to CASE, as this was being passed to SQL Server, and the UNBIND version filled in the values of the bind variables.
Illustration Example
For the sake of illustration, I will share two examples. In Microsoft SQL Server there is a common issue where a statement with an OR does not yield a good query plan. Converted to a UNION, the same statement gets a far better execution plan and performance. Hence any such SQL statements are good candidates for the QueryHook.
Converting an "OR" to a "UNION"
The SQL mentioned above is the illustration for converting the OR statement to a UNION. However, to keep the SQL conversion database independent, the search string does not assume that the DECODE will be converted to a CASE statement. So the search pattern is as follows:
select
distinct lodnum
$$$$$
(?is)select\s+distinct\s+lodnum\s*,(.*)from\s+inventory_view\s+where\s+\(\(\s*(.*?)\s*=\s*(.*?)\s+and\s+lodnum\s*=\s*(.*?)\s*\)\s+OR\s+\(\s*(.*?)\s*=\s*(.*?)\s+and\s+subnum\s+=\s+(.*?)\s*\)\s+OR\s+dtlnum\s+=\s+(.*?)\s*\)\s*$
$$$$$
select
distinct lodnum,
$1
from inventory_view
where ($2 = $3
and lodnum = $4)
union
select distinct lodnum,
$1
from inventory_view
where ($5 = $6
and subnum = $4)
union
select distinct lodnum,
$1
from inventory_view
where dtlnum = $4
Explanation
- The first line is the pattern that needs to match exactly
  - Notice the number of spaces between distinct and lodnum
  - This needed to be specified this way because the XLATE version illustrated above shows it in the same manner
  - Hint: copy the text directly from the trace file and place it here
- After the $$$$$ comes the second portion of the QueryHook, where the SQL is parsed based on the regular expression provided
  - It starts with (?is) to specify ignoring case during the pattern matching and ignoring new lines
  - After looking for "select", it says \s+, meaning there should be at least one white space but more is OK
  - After "distinct" it does the same, \s+, until "lodnum". Now that we are using regular expressions, we don't need to worry about all the spaces between "distinct" and "lodnum"
  - After that we have \s* up to the comma, so any amount of white space, including none
  - Then comes the first group (.*). Note that any search criteria enclosed in () become a group. As mentioned before, this QueryHook did not need to worry about DECODE vs. CASE, so it says that any text between "lodnum," and "from" is fine. However, I want that text in my group ($1 in this case), as I want to use it in my query rewrite
  - After this, it looks for "from", then "inventory_view", then "where"
  - After that it starts to look for more groups of data, as the XLATE showed that these are possible bind variables. We need to make sure that the query will match regardless of how the parameters are passed to it
  - The search pattern terminates with $
- After the next $$$$$ comes the new SQL
  - In this section, you are allowed to write any valid SQL statement
  - It may or may not have relevance to the original SQL (as illustrated in the next example)
  - However, in most cases you will be rewriting the query to yield a similar result set, just in a faster manner
  - So, in this illustration the OR was turned into a three-way UNION
  - The first part of the UNION uses the first part of the OR, and so on for the next two portions
  - The important part is the utilization of the groups:
    - Note that $1 was all the text matched after lodnum up to the "from", so in our rewrite we use it as is
    - The rest of the groups need to be placed properly as the UNION is expanded, so the proper group is used where needed
    - Since the original statement only provides the value of the load number, captured first as $4, it gets used throughout. However, the same statement could also have been written with:
      - subnum = $7
      - dtlnum = $8
After this QueryHook is introduced, the trace shows the rewritten UNION statement being sent to the database in place of the original OR form.
Converting a statement to a Completely Different One:
Another possibility is that a statement needs to be rewritten due to performance, but the functionality behind it is not used in your implementation. Even though you could go through the exercise of a full rewrite, the statement would still take precious seconds to execute, yet produce no benefit. So, in this case, simply returning a result set from DUAL yields a faster execution time.
Caution: this approach should be used with extreme care, as turning the functionality on in the future will not work, because we have changed the statement completely using the QueryHook.
select
distinct trnsp_mode.palletctl_flg
$$$$$
^select\s+distinct\s+trnsp_mode.palletctl_flg\s+from\s+pckwrk\s*,\s*shipment\s*,\s*stop\s*,\s*car_move\s*,\s*cardtl\s*,\s*trnsp_mode\s*,\s*wrkque\s+where\s+shipment.ship_id\s*=\s*pckwrk.ship_id\s+and\s+shipment.stop_id\s*=\s*stop.stop_id\s+and\s+stop.car_move_id\s*=\s*car_move.car_move_id\s+and\s+car_move.carcod\s*=\s*cardtl.carcod\s+and\s+\(wrkque.wrkref\s*=\s*pckwrk.wrkref\s+or\s+wrkque.list_id\s*=\s*pckwrk.list_id\)\s+and\s+wrkque.wh_id\s*=\s*pckwrk.wh_id\s+and\s+wrkque.wh_id\s*=\s*(.*?)\s+and\s+wrkque.reqnum\s*=\s*(.*?)\s*and\s*trnsp_mode.trnsp_mode\s*=\s*nvl\s*\(car_move.trans_mode\s*,\s*cardtl.cartyp\)$
$$$$$
select 0 as palletctl_flg from dual
Explanation
- The first line is the leading edge to match exactly
- The line after the $$$$$ has the regular expression to match. This is complex SQL, so the regular expression is quite long; however, with knowledge of each search pattern it can be understood
- After the next $$$$$ comes the replacement SQL. Notice that this is simply a select from DUAL. We did make sure that the correct column is returned so the result set is not compromised for the downstream logic. But how the 0 was returned did not matter, and since the Pallet Control logic is not being used, this rewrite yielded the best performance in this case.
Conclusion
The purpose of this blog entry was to illustrate an important and useful MOCA feature, the QueryHook. Even though caution must be exercised, it provides a valuable way to fine-tune standard SQL statements that JDA may not even be able to fix, since the given SQL may be performing poorly only in your specific case. That was also the purpose of the two illustrations: to emphasize that these SQL statements would not be changed by JDA, as they are optimized from JDA's perspective.
Monday, April 23, 2018
New SQL Functions to aid in Reporting
The SQL language (as well as MOCA) has steadily improved, adding a host of new functions that can make many reporting requirements quite easy to handle. In this blog I will showcase the use of the following functions. Please note that my reference point in this blog is the Oracle database; however, where appropriate I will provide the corresponding information for SQL Server.
- LISTAGG (STRING_AGG in SQL)
- PIVOT (Same in SQL)
- NTH_VALUE (no corresponding function in SQL)
- ROLLUP (Same in SQL)
- CUBE (Same in SQL)
- In Memory (Internal) Table in MOCA
I will leave it to the reader to look up the exact syntax for these functions. The purpose of this blog is simply to provide possible use cases for them and point the reader to the value they provide.
While the MOCA language may provide some ways of computing these (like building the LISTAGG result with the "convert column results to string" command), there is real value and benefit when the task of producing the correct result set is passed to the SQL engine, reducing the complexity of the code and improving performance.
LISTAGG:
This function returns a list of values concatenated together within a GROUP BY. Often we are challenged to show all the distinct values within a group without listing them in multiple rows. For example, a use case can be showing all the locations from which a shipment is being picked. Traditionally, I would have one select statement to list all the shipments from PCKWRK, then another select to get the SRCLOC, move it to a result set, and then use the "convert column results to string" MOCA command to build a result string and publish the entire result set. Using LISTAGG, I can do all of that within one SQL statement. An example is given below for reference:
[select ship_id,
listagg(srcloc, ',') within group (order by srcloc) loclist
from pckwrk_view
group by ship_id]
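On SQL Server 2017 and later, the same result can be produced with STRING_AGG; a sketch:

[select ship_id,
        string_agg(srcloc, ',') within group (order by srcloc) loclist
   from pckwrk_view
  group by ship_id]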
PIVOT:
A common need in reporting is to show data across multiple columns, based on a group of data. Again taking PCKWRK as the data set, our need is to show the number of picks coming from each area for all shipments. This requires all SRCARE values to show as columns in the report. Using the PIVOT function, this becomes quite easy. The following example illustrates this:
[select '''' || arecod || ''' as ' || arecod arelist from
aremst a
where pckcod != 'N'
and exists (select 1 from locmst
b where b.arecod=a.arecod and b.pckflg=1)]
>> res
|
convert column results to string
where res = @res and colnam
= 'arelist' and separator = ','
|
[select * from (
select ordnum, srcare, pckqty
from pckwrk_view
where pcksts='R'
and appqty=0
and adddte > sysdate-30)
PIVOT ( count(*) cnt, sum(pckqty) pckqty for srcare in
( @result_string:raw ) )
]
NTH_VALUE:
This is an interesting function for which I am struggling to find a good illustrative use case. However, I am sure someone has come across a need for it, so please provide feedback if you find it useful. The function returns the Nth value (first, second, ...) within a group. So if the need is to show, say, the second value based on the sort criteria of the group, this function is useful. For my illustration I came up with a requirement to show my PCKQTY in relation to the top two PCKQTYs in my wave. The top one I can easily get as a MAX; the second one is not possible (at least not easily) without NTH_VALUE. Note that I am purposely using different ORDER BY values to illustrate the benefit:
[select schbat, ship_id, cnt, pckqty, pckqty/nv1*100 pct1, pckqty/nv2*100 pct2
from (
select schbat, ship_id, count(*) cnt, sum(pckqty) pckqty,
nth_value( sum(pckqty), 1) over (partition by schbat order by sum(pckqty) desc
rows between unbounded preceding and unbounded following ) nv1,
nth_value( sum(pckqty), 2) over (partition by schbat order by sum(pckqty) desc
rows between unbounded preceding and unbounded following ) nv2
from pckwrk_view
group by schbat, ship_id
having count(*) > 10
)
order by 1,2,4]
ROLLUP:
A common need in reporting is doing sub-totals on a secondary group field. While we can set this up in a proper reporting tool, it can be tedious in plain SQL and when prototyping. Having ROLLUP available as part of the SQL language makes many of these reporting tasks (especially in a DDA) quite easy. Staying with the PCKWRK data as the example, the following query gets a partial sum for each SCHBAT and then a grand total at the end:
[select schbat, ship_id, count(*) cnt, sum(pckqty) pckqty
from pckwrk_view
where wrktyp='P'
group by rollup(schbat, ship_id)
order by 1,2,4]
CUBE:
Similar to ROLLUP, except that it produces totals for all possible combinations of the grouping columns rather than just the hierarchical ones. The benefit of CUBE comes into play when you have more than two columns in the GROUP BY and want to see the different combinations play out. In the example below, we group PCKWRK data by SCHBAT, SHIP_ID, and SRCARE. One set of summations is at the SCHBAT, SHIP_ID, SRCARE level, but CUBE will also show the combinations of SCHBAT and SRCARE, and of SCHBAT and SHIP_ID, as extra sets (ROLLUP would only have provided SCHBAT and SHIP_ID):
[select schbat, ship_id, srcare, count(*) cnt, sum(pckqty) pckqty
from pckwrk_view
where wrktyp='P'
group by cube(schbat, ship_id, srcare)
order by 1,2,4]
In Memory (Internal) Tables in MOCA:
The use of this data structure for Integrator purposes is explained quite extensively in Saad Ahmad's blog (http://saadwmsblog.blogspot.com/2015/12/integrator-in-memory-database-makes.html). However, I wanted to share its use for reporting. The challenge we often face in reporting is that the data being put together does not match in keys and dimensions, and trying to accomplish the feat with joins hurts performance. Furthermore, there is the challenge that the grouping of data across different tables does not match. These are the scenarios where loading data into an internal table and then doing the GROUP BY/ORDER BY on that data can solve the problem with better performance.
For illustration purposes, I will use an example where I need to get data from the Daily Transaction (DLYTRN) table. I am trying to find all movements of inventory by pallet and device ID, but only show the first from-location and the last to-location in the report (by pallet/device), where the last location is determined by transaction date. You can see that a simple MIN/MAX on FRSTOL or TOSTOL cannot work here, so the query had to be broken into multiple select statements. The challenge was then to still show the data organized and grouped in its own dimensions. Using internal tables with GROUP BY and ORDER BY solved this issue (please note that the following code is for illustration purposes, to explain the problem with a working solution):
{
    /* Pass 1: one row per user/pallet/device with the first from-location */
    [select min(trndte) trndte, usr_id, frstol, prtmst_view.dsp_prtnum,
            prtmst_view.typcod, dlytrn.lotnum, dlytrn.lodnum, sum(trnqty) trnqty, devcod
       from dlytrn
       join prtmst_view
         on prtmst_view.prtnum = dlytrn.prtnum
        and prtmst_view.prt_client_id = dlytrn.prt_client_id
        and prtmst_view.wh_id = dlytrn.wh_id
      where fr_arecod = @frarecod
        and dlytrn.wh_id = @wh_id
        and (dlytrn.trndte >= @st_dte:date or @st_dte is NULL)
        and (dlytrn.trndte <= @ed_dte:date or @ed_dte is NULL)
        and @+dlytrn.usr_id
      group by usr_id, frstol, prtmst_view.dsp_prtnum,
               prtmst_view.typcod, dlytrn.lotnum, devcod, dlytrn.lodnum]
    |
    /* Pass 2: for each row, the last to-location for that pallet and device */
    [select tostol
       from ( select tostol from dlytrn
               where trndte > @trndte:date
                 and dlytrn.wh_id = @wh_id
                 and devcod = @devcod and lodnum = @lodnum
                 and rownum < 9999
               order by dlytrn_id desc )
      where rownum < 2
    ]
    |
    if (@tostol)
        filter data where moca_filter_level = '2' and @+tostol
} >> my_res
|
{
    release internal tables
    ;
    create internal table
     where table_name = 'uc_my_putaway_detail'
       and res = @my_res
    |
    /* Re-group the combined result set in its own dimensions */
    select from internal table
     where select_stmt = "select min(trndte) trndte, usr_id, frstol,
           dsp_prtnum, typcod, lotnum, lodnum, sum(trnqty) trnqty, tostol"
       and table_name = "uc_my_putaway_detail"
       and group_by_clause = "group by usr_id, frstol, dsp_prtnum,
           typcod, lodnum, lotnum, tostol"
       and order_by_clause = "order by usr_id, trndte"
}