
Monday, April 23, 2018

New SQL Functions to aid in Reporting

The SQL language (as well as MOCA) has steadily improved, adding a host of new functions that can make many reporting requirements quite easy to handle.  In this blog, I will showcase the use of the following functions.  Please note that my reference point in this blog is Oracle Database; however, where appropriate I will provide the corresponding information for SQL Server.

  • LISTAGG (STRING_AGG in SQL Server)
  • PIVOT (same in SQL Server, with some differences)
  • NTH_VALUE (no corresponding function in SQL Server)
  • ROLLUP (same in SQL Server)
  • CUBE (same in SQL Server)
  • In Memory (Internal) Table in MOCA
I will leave it to the reader to look up the exact syntax for these functions.  The purpose of this blog is simply to provide possible use cases for these functions and point the reader to the value they provide.

While the MOCA language may provide ways to calculate some of these (like emulating LISTAGG with the "convert column results to string" command), there is real value and benefit in passing the task of producing the correct result set to the SQL engine: it reduces the complexity of the code and improves performance.

LISTAGG:
This function returns a list of values concatenated together within a GROUP BY.  Oftentimes we are challenged to provide all the distinct values within a group without listing them in multiple rows.  For example, one use case is showing all the locations from which a shipment is being picked.  Traditionally, I would have one select statement to list all the shipments from PCKWRK, then another select to get the SRCLOC, move it to RES, and then use the "convert column results to string" MOCA command to build a result string and publish the entire result set.  Using LISTAGG, I can do all of that within one SQL statement.  An example is given below for reference:


[select ship_id, 
  listagg(srcloc, ',') within group (order by srcloc) loclist
 from pckwrk_view
 group by ship_id]
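
On SQL Server (2017 and later), STRING_AGG provides the same capability.  A minimal sketch of the equivalent query, assuming the same PCKWRK_VIEW is available:

[select ship_id,
  string_agg(srcloc, ',') within group (order by srcloc) loclist
 from pckwrk_view
 group by ship_id]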


PIVOT:
A common need in reporting is to show data across multiple columns, based on a group of data.  Again taking PCKWRK as the data set, our need is to show the number of picks coming from each area for all shipments.  This requires all SRCARE values to show as columns in the report.  Using the PIVOT function, this becomes quite easy.  The following is an example illustrating this:

[select '''' || arecod || ''' as ' || arecod arelist from aremst a 
  where pckcod != 'N' 
  and exists (select 1 from locmst b where b.arecod=a.arecod and b.pckflg=1)]
   >> res
|
convert column results to string 
 where res = @res and colnam = 'arelist' and separator = ','
|
[select * from (
select ordnum, srcare, pckqty
from pckwrk_view
where pcksts='R'
and appqty=0
and adddte > sysdate-30)
 PIVOT ( count(*) cnt, sum(pckqty) pckqty for srcare in ( @result_string:raw ) )
]
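
In SQL Server, the PIVOT clause is similar, but it allows only one aggregate per PIVOT, and the IN list takes bracketed column identifiers rather than quoted strings with aliases (so a dynamic column list has to be built with dynamic SQL instead of the :raw substitution above).  A minimal sketch, with the placeholder area codes AREA1 and AREA2 standing in for actual SRCARE values:

[select * from (
  select ordnum, srcare, pckqty
  from pckwrk_view
  where pcksts = 'R'
  and appqty = 0
  and adddte > dateadd(day, -30, getdate())) src
 PIVOT ( sum(pckqty) for srcare in ( [AREA1], [AREA2] ) ) p]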

NTH_VALUE:
This is an interesting function for which I am struggling to find a good use case.  However, I am sure someone has come across a need, so please provide feedback if you find this useful.  This function returns the Nth value (first, second, ...) within a group.  So if the need is to show the second value based on the sortation criteria of the group, this function will be useful.  For my illustration, I came up with a requirement where I need to show my PCKQTY in relation to the top 2 PCKQTYs in my wave.  While I can easily get the top value with MAX, the second is not possible (at least not easily) without the NTH_VALUE function.  Note that I am purposely using different ORDER BY values to illustrate the benefit:


[select schbat, ship_id, cnt, pckqty, pckqty/nv1*100 pct1, pckqty/nv2*100 pct2
from (
select schbat, ship_id, count(*) cnt, sum(pckqty) pckqty, 
   nth_value( sum(pckqty), 1) over (partition by schbat order by sum(pckqty) desc
      rows between unbounded preceding and unbounded following ) nv1,
   nth_value( sum(pckqty), 2) over (partition by schbat order by sum(pckqty) desc
      rows between unbounded preceding and unbounded following ) nv2
from pckwrk_view
group by schbat, ship_id
having count(*) > 10
order by 1,2,4]
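
SQL Server has no NTH_VALUE, but one possible workaround is to rank each group's total with DENSE_RANK and then pick out the first- and second-ranked totals with a windowed MAX(CASE ...).  A minimal sketch of the same report under that approach, assuming the same PCKWRK_VIEW:

[select schbat, ship_id, cnt, pckqty,
   pckqty * 100.0 / max(case when rnk = 1 then pckqty end)
      over (partition by schbat) pct1,
   pckqty * 100.0 / max(case when rnk = 2 then pckqty end)
      over (partition by schbat) pct2
 from (
   select schbat, ship_id, count(*) cnt, sum(pckqty) pckqty,
      dense_rank() over (partition by schbat
                         order by sum(pckqty) desc) rnk
   from pckwrk_view
   group by schbat, ship_id
   having count(*) > 10 ) t
 order by 1, 2, 4]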

ROLLUP:
A common need in reporting is producing sub-totals at a secondary group field.  While we can set this up in a proper reporting tool, it can be tedious in simple SQL and when doing prototyping.  Having access to ROLLUP as part of the SQL language makes many of these reporting tasks (especially a DDA) quite easy.  Staying with the PCKWRK data as the example, the following query illustrates getting a partial sum for each SCHBAT and then a grand total at the end:

[select schbat, ship_id, count(*) cnt, sum(pckqty) pckqty
from pckwrk_view
where wrktyp='P'
group by rollup(schbat, ship_id)
order by 1,2,4]
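
One practical note on ROLLUP: the subtotal and grand-total rows come back with NULL in the rolled-up columns, which is indistinguishable from a genuine NULL in the data.  The GROUPING function (available in both Oracle and SQL Server) tells the two apart.  A minimal sketch that labels the generated rows:

[select case grouping(schbat) when 1 then 'ALL BATCHES' else schbat end schbat,
   case grouping(ship_id) when 1 then '** SUBTOTAL **' else ship_id end ship_id,
   count(*) cnt, sum(pckqty) pckqty
 from pckwrk_view
 where wrktyp = 'P'
 group by rollup(schbat, ship_id)
 order by 1, 2]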

CUBE:
Similar to ROLLUP, except it produces all possible combinations of the group fields (2^n groupings, where n is the number of columns) and provides their totals.  The benefit of CUBE comes into play when you have more than 2 columns in the GROUP BY, to see the different combinations play out.  In the example below, we are grouping data from PCKWRK by SCHBAT, SHIP_ID and SRCARE.  So one set of summation is at the SCHBAT, SHIP_ID and SRCARE level, but CUBE will also show combinations such as SCHBAT and SRCARE, and SCHBAT and SHIP_ID (ROLLUP was only going to provide SCHBAT and SHIP_ID):

[select schbat, ship_id, srcare, count(*) cnt, sum(pckqty) pckqty
from pckwrk_view
where wrktyp='P'
group by cube(schbat, ship_id, srcare)
order by 1,2,4]
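
With three columns, CUBE produces eight grouping combinations, so it helps to tag each row with the combination it came from.  GROUPING_ID (again available in both Oracle and SQL Server) returns a bit vector for that purpose: 0 is the detail level and 7 is the grand total here.  A minimal sketch:

[select grouping_id(schbat, ship_id, srcare) grp_id,
   schbat, ship_id, srcare, count(*) cnt, sum(pckqty) pckqty
 from pckwrk_view
 where wrktyp = 'P'
 group by cube(schbat, ship_id, srcare)
 order by grp_id, 1, 2, 3]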

In Memory (Internal) Tables in MOCA:
The use of this data structure for Integrator purposes is explained quite extensively in Saad Ahmad's blog (http://saadwmsblog.blogspot.com/2015/12/integrator-in-memory-database-makes.html).  However, I wanted to share their use for reporting.  The challenge we often face in reporting is that the data being put together does not match in keys and dimensions.  Trying to accomplish this feat by joining hurts performance.  Furthermore, there is the challenge that the grouping of data across different tables does not match.  These are the scenarios where loading data into an internal table and then doing a GROUP BY/ORDER BY on that data can solve the problem and achieve the solution with better performance.

For illustration purposes, I will use an example where I need to get data from the Daily Transaction (DLYTRN) table.  I am trying to find all movements of inventory for a pallet and device ID, but only show the first from-location and the last to-location in the report (by pallet/device).  Also, the last location needs to be based on the transaction date.  You can see that a simple MIN/MAX on FRSTOL or TOSTOL cannot work here, so the query had to be broken into multiple select statements.  However, the challenge was then to still show the data organized and grouped in its own dimensions.  Using internal tables with GROUP BY and ORDER BY solved this issue (please note that the following code is for illustration purposes, to explain the problem with a working solution):

{
[select min(trndte) trndte, usr_id, frstol, prtmst_view.dsp_prtnum,
   prtmst_view.typcod, dlytrn.lotnum, dlytrn.lodnum, sum(trnqty) trnqty, devcod
from dlytrn 
 join prtmst_view 
on prtmst_view.prtnum = dlytrn.prtnum 
and prtmst_view.prt_client_id = dlytrn.prt_client_id 
and prtmst_view.wh_id = dlytrn.wh_id
where fr_arecod=@frarecod 
and dlytrn.wh_id=@wh_id
and (dlytrn.trndte  >= @st_dte:date or @st_dte is NULL)
and (dlytrn.trndte  <= @ed_dte:date  or @ed_dte is NULL)
and @+dlytrn.usr_id
group by usr_id, frstol, prtmst_view.dsp_prtnum, 
  prtmst_view.typcod, dlytrn.lotnum, devcod, dlytrn.lodnum]
|
[select tostol
from ( select tostol from dlytrn 
       where trndte > @trndte:date
       and dlytrn.wh_id=@wh_id
       and devcod=@devcod and lodnum=@lodnum
       and rownum < 9999
       order by dlytrn_id desc )
where rownum < 2
]
|
if (@tostol)
  filter data where moca_filter_level = '2' and @+tostol
} >> my_res
|
{
  release internal tables
  ;
  create internal table
   where table_name = 'uc_my_putaway_detail'
   and res = @my_res
  |
  select from internal table
   where select_stmt ="select min(trndte) trndte, usr_id, frstol,
     dsp_prtnum, typcod, lotnum,lodnum, sum(trnqty) trnqty, tostol"
   and table_name="uc_my_putaway_detail"
   and group_by_clause="group by usr_id, frstol, dsp_prtnum, 
                                 typcod,lodnum, lotnum, tostol"
   and order_by_clause="order by usr_id,trndte"
}



Wednesday, April 18, 2018

Using Oracle's Flashback Query Concepts in JDA/RedPrairie Implementations

Oracle offers a very powerful capability through its Flashback Technology (https://docs.oracle.com/cd/B28359_01/appdev.111/b28424/adfns_flashback.htm#g1026131).  This article is solely focused on how to take advantage of it in JDA/RedPrairie implementations that run on Oracle databases.

A common problem that we all face in triaging a JDA/RedPrairie issue is how to recreate it.  Most post-production issues are data related, and recreating them is tedious, especially in production.  It could be how a pick release did not function as expected, a list generated incorrectly, a hop criterion not fulfilled, or any number of other business rules that are solely data dependent and change state very quickly in the course of transactions in the JDA/RedPrairie system.

How often do we wish we had a picture of the data just before the problem occurred?  We resort to looking at Daily Transactions for clues, or simply put in new data to see if the issue re-occurs and hope that the trace will capture it.

Welcome to Oracle's Flashback Technology.

Using something like the following is actually possible:

[select * from pckwrk_view 
 AS OF TIMESTAMP TO_TIMESTAMP ('2018-04-20 12:00:00', 'YYYY-MM-DD HH24:MI:SS')
 where ship_id='MYSHIPMENT']


  • The above query has the additional AS OF TIMESTAMP criterion
  • This date/time will be in the past, to see the state of the data as of that moment
  • The rest of the statement allows for normal WHERE criteria, etc.
  • Since it is MOCA, you can pipe into compound statements, all using the same AS OF TIMESTAMP, to see data from multiple tables (like the PCKMOVs for the above PCKWRKs) - see the sketch below
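
A minimal sketch of such a compound statement, assuming PCKMOV joins back to PCKWRK on CMBCOD in your version of the schema:

[select * from pckwrk_view 
 AS OF TIMESTAMP TO_TIMESTAMP ('2018-04-20 12:00:00', 'YYYY-MM-DD HH24:MI:SS')
 where ship_id='MYSHIPMENT']
|
[select * from pckmov
 AS OF TIMESTAMP TO_TIMESTAMP ('2018-04-20 12:00:00', 'YYYY-MM-DD HH24:MI:SS')
 where cmbcod = @cmbcod]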
I am sure that by this time the excitement level is building and ideas are coming to mind as to how to utilize this feature.  Just be mindful that this is not something with an unlimited range.  Based on how you set this up, the window of your historic data viewing may be limited.  However, knowing that this option is available and can be set up to my specifications is a blessing nonetheless.

The Oracle article above explains in full detail how to use this and how to set it up.  I encourage everyone to utilize it and, if you are using Oracle as the database, to make sure this feature is turned on.

Some other benefits that can be achieved:
  • Recovering deleted data (or data updated to incorrect values)
  • Recreating an issue
  • Recovering dropped tables
  • Creating a chronological view of changes to data (great for auditing or providing some type of audit trail) - this employs the VERSIONS BETWEEN feature; see the sketch after this list
  • Creating a Flashback Data Archive - no longer do we need to add code for tracking transactional changes to tables for audit reasons
    • A Flashback Data Archive is useful for compliance and audit reports
    • Post-go-live and during Hypercare, many critical transactional tables can be kept in this archive so that longer data retention is available
    • Options such as QUOTA and RETENTION (time limit) can be specified to control how long this data is retained
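
As a sketch of the VERSIONS BETWEEN feature mentioned above: Flashback Version Query works against tables rather than views, so I am assuming a base table such as PCKWRK_DTL here.  The VERSIONS_* pseudo-columns report when each row version existed and which operation (I/U/D) created it:

[select versions_starttime, versions_endtime, versions_operation,
   wrkref, pcksts, pckqty
 from pckwrk_dtl
 versions between timestamp
     to_timestamp('2018-04-20 08:00:00', 'YYYY-MM-DD HH24:MI:SS')
 and to_timestamp('2018-04-20 12:00:00', 'YYYY-MM-DD HH24:MI:SS')
 where ship_id='MYSHIPMENT'
 order by versions_starttime]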
In today's world where disk is cheap, I would encourage everyone to utilize this feature in the best possible way.  I am sure that once it is enabled, there will come a time when you are glad you had it.  These features have been available since Oracle 9i; however, the technology has improved from being solely a DBA function to being available as a user function (based on your rights).  So please investigate your database version to see what extent of these features is available for your implementation.

Also, at this time, I have not found anything similar in SQL Server.