Getting a record set from SQL on IBM i


So, I am trying to modernize the code on my IBM i, and I am thinking about subfiles and printfiles.

Excluding native I/O operations, I can think of three ways to get my data into the subfile using embedded SQL.

  1. Cursor Fetch method
  2. Multiple-occurrence data structure sized to hold one page of records
  3. A much larger multiple-occurrence data structure holding multiple pages of data

What is the best practice method? Any opinions?

There are 2 answers

Charles

Possibly off-topic as "primarily opinion based"

But a DS array ... generally, the larger the array you fetch into, the better the performance ...
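A blocked fetch into a DS array along those lines might look something like this minimal sketch; the table (customer), its columns, the cursor name (custCur), and the 9999-element array size are all hypothetical:

// Hypothetical sketch of a blocked fetch into a DS array.
dcl-ds pageData qualified dim(9999);
  custId   packed(7: 0);
  custName char(30);
end-ds;

dcl-s rowsFetched int(10);

exec sql declare custCur cursor for
  select cust_id, cust_name
    from customer
   order by cust_id;

exec sql open custCur;

// Fetch up to 9999 rows in one call; SQLERRD(3) reports how many came back.
exec sql fetch next from custCur for 9999 rows into :pageData;
rowsFetched = sqlerrd(3);

exec sql close custCur;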

jmarkmurphy

I have never backed a subfile with an array, though I suppose it would be a useful technique if you were going to have more than 9,999 records. Instead I have always just read into it from a cursor. In fact, mixing SQL and procedures makes populating a subfile extremely easy. You can even use a multi-record fetch if it makes sense to you. Here is a simple example (single-record fetch):

dcl-proc LoadSubfile;
  dcl-pi *n;
    keyFields       Type;            // key values used to position the cursor
  end-pi;

  // I/O data structure for the subfile record format
  dcl-ds sflin        LikeRec(sfl: *Input) Inz;

  ClearSubfile();                    // clear the subfile (SFLCLR)
  OpenCursor(keyFields);
  dow FetchCursor(record);           // *On while a row is returned in 'record'
    eval-corr sflin = record;        // copy like-named fields into the subfile DS
    PopulateHidden(sflin);           // save original values in hidden fields
    rrn += 1;
    write sfl sflin;
  enddo;
  CloseCursor();
  rrnMax = rrn;                      // number of records loaded
end-proc;

There are some things here that are not defined. For instance, FetchCursor() returns an indicator that is *On if a record was returned, and PopulateHidden() populates hidden fields in the subfile record. I use hidden fields in an editable subfile to hold the original values of fields that can be changed. I define the subfile fields the same as the record fields so that I can do an eval-corr to get them into the I/O data structure. I will also check the subfile rrn for overflow if I believe it is possible for there to be more than 9,999 records in the database; in that case I throw a subfile-full message with directions to filter the record set.
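As a rough illustration only (not the actual code behind the example above), the cursor wrappers might look something like this; the cursor name c1, the customer table, its columns, and the record_t layout are all placeholders:

// Hypothetical row layout; the real fields would match the cursor's select list.
dcl-ds record_t qualified template;
  custId   packed(7: 0);
  custName char(30);
end-ds;

dcl-proc OpenCursor;
  dcl-pi *n;
    keyFields packed(7: 0) const;    // placeholder key parameter
  end-pi;

  exec sql declare c1 cursor for
    select cust_id, cust_name
      from customer
     where cust_id >= :keyFields
     order by cust_id;
  exec sql open c1;
end-proc;

dcl-proc FetchCursor;
  dcl-pi *n ind;
    outRecord likeds(record_t);
  end-pi;

  exec sql fetch next from c1 into :outRecord;
  return (sqlcode = 0);              // *On while a row was returned
end-proc;

dcl-proc CloseCursor;
  dcl-pi *n;
  end-pi;

  exec sql close c1;
end-proc;

With wrappers like these, the record variable in LoadSubfile() would simply be declared likeds(record_t).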

Some other things you didn't ask about, but I will tell you since you asked about best practices: unless there is a reason to avoid it, I use SFLCLR to clear the subfile, and I generally load the entire subfile in one shot unless I suspect that there will be thousands of records. Many of the old optimizations, like SFLNXTCHG and loading a single page at a time, were put in place because communications were slow. It was not so much the twinax communications as the ASCII workstation controllers, or remote workstations that were often on the other side of a communications line much slower than twinax. That is no longer true. I tend to avoid those old hacks meant to preserve bandwidth because they only complicate the code. Write the whole thing at once, and when processing, read through the whole subfile with a for loop.
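For the processing side, a rough sketch of that for-loop style (assuming the subfile record format sfl, its relative record number in rrn, and the rrnMax counter from the load procedure) could look like this:

dcl-ds sflout likerec(sfl: *input);

// After EXFMT of the subfile control record, walk every row by RRN.
for rrn = 1 to rrnMax;
  chain rrn sfl sflout;
  if not %found;
    iter;
  endif;
  // Compare sflout against the hidden original-value fields and
  // update only the rows the user actually changed.
endfor;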

Also, in the future, these questions should be on Code Review if you have working code that you want to know about best practices for, or maybe Software Engineering if you are more interested in theoretical answers. Stack Overflow is more for answering specific objective questions.