>>--CHAROUT(--+------+--+---------------------------+--)--------><
              +-name-+  +-,--+--------+--+--------+-+
                             +-string-+  +-,start-+
returns the number of single-byte characters remaining after attempting to write string to the character output stream name. If you omit name, characters in string are written to the default output stream. The string can be the null string, in which case no characters are written to the stream, and 0 is always returned.
For variable-format streams with the TEXT option, the LINEEND character must be supplied to indicate the end of the record. For fixed-format streams with the TEXT option, the LINEEND character does not have to be given, as the data will be split at the appropriate record length. If a LINEEND character is given that causes a record shorter than the logical record length, the data is padded with blanks before being written. The LINEEND character is never written to the stream in TEXT mode; it only serves as an indicator of the end of a line. For fixed- or variable-format streams with the BINARY option, a full buffer indicates the end of a record.
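As a minimal sketch of the LINEEND behavior just described, assuming '15'x is the LINEEND character in effect and using a made-up V-format file TEST DATA A:

/* Sketch only: one CHAROUT() call writes two records by embedding */
/* the LINEEND character in the string. We assume '15'x is the     */
/* LINEEND character in effect; TEST DATA A is a made-up file.     */
fileid='TEST DATA A'
lineend='15'x                            /* assumed LINEEND character */
call stream fileid,'C','OPEN REPLACE'    /* open (and empty) the file */
call charout fileid,'first record'lineend'second record'lineend
call stream fileid,'C','CLOSE'           /* don't forget to close     */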
A start value (1 is the only valid value in VM) may be given to specify the start of the stream.
Note: You will get an error if you try to overwrite a record with another record that has a different length.
We racked our brains to find an example where the use of CHAROUT() would give an advantage over other methods. CHAROUT() (and, as a matter of fact, the other stream I/O functions too) is useful in the following cases:
As the second case can be useful in CMS, we include a sample below:
/* This exec creates a new file in SFS directory MYSUBDIR */
address command
fileid='MY RDR .MYSUBDIR'              /* note the directory id as filemode */
RdrFiles=diag(8,'Q R * ALL')           /* get response from CP Q RDR * ALL  */
call stream fileid,'C','OPEN REPLACE'  /* open file for REPLACE             */
call charout fileid,RdrFiles           /* write the variable                */
call stream fileid,'C','CLOSE'         /* don't forget to close             */
We repeat that it is just an example. A CMS PIPE command is shorter and faster too (even with the extra ACCESS):
ACCESS .MYSUBDIR Z
PIPE CP Q RDR ALL!> MY RDR Z
but this can be written as (without accessing the directory):
PIPE CP Q RDR ALL!>SFS MY RDR .MYSUBDIR
>>--LINEOUT(--+------+--+--------------------------+--)--------><
              +-name-+  +-,--+--------+--+-------+-+
                             +-string-+  +-,line-+
returns the count of lines remaining after attempting to write string to the character output stream name. The count is either 0 (meaning the line was successfully written) or 1 (meaning that an error occurred while writing the line). The string can be the null string, in which case only the action associated with completing a line is taken.
On Personal Systems, LINEOUT() adds a line-feed and carriage-return character to the end of the string.
You can specify a line number to set the write position to the start of a particular line in a persistent stream. This line number must be positive and within the bounds of the stream (though it can specify the line number immediately after the end of the stream). A value of 1 for line refers to the first line in the stream.
On Personal Systems, 1 is the only valid value for line.
All lines written to an F-format output stream will be padded with blanks as necessary. If the string is too long, a NOTREADY condition will be raised. For V-format streams, no padding or truncation is done and if the output string is null, a null line is written to the stream.
Note: You will get an error if you try to replace a record with another record that has a different length.
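As a minimal sketch of the line parameter and of this note, assuming an existing F-format file with at least three records (the fileid TEST DATA A and the new contents are made up):

/* Sketch only: replace the third record of an assumed existing    */
/* F-format file; the string is padded with blanks to the logical  */
/* record length, so the record keeps its original length.         */
fileid='TEST DATA A'
call lineout fileid,'new contents of line 3',3   /* write at line 3 */
call stream fileid,'C','CLOSE'                   /* close the file  */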
We give an example here. It performs the same function as the CHAROUT() example, but here we have to loop over the records, as the LINEEND character is not recognized:
/* This exec writes the output of CP QUERY RDR ALL to a file */
address command
fileid='MY RDR .MYSUBDIR'
RdrFiles=diag(8,'Q R * ALL')             /* get CP output         */
call stream fileid,'C','OPEN REPLACE'    /* open with REPLACE     */
do while RdrFiles<>''
  parse var RdrFiles record '15'x RdrFiles
  call lineout fileid,record             /* write the record      */
end
call stream fileid,'C','CLOSE'           /* don't forget to close */
exit
Although we find it not very natural, it is possible to close a file, opened by another stream function, by issuing call lineout fileid, that is, without a string or line count.
We have seen that CMS allows direct access to file records. This is true not only for read, but also for write. This means that you can replace a record in a file.
To replace a specific record in a file, you can, for example, use the linenum parameter of EXECIO, as here, where record 17 is replaced:
'EXECIO 1 DISKW EXISTING FILE A 17 (STRING This is a new string'
This, of course, poses no problem when you work with Fixed Length Records. But, if you work with Variable Length Records, then either you would destroy the subsequent record if you replace it with a longer string, or you would leave a hole if you write a shorter string. In these cases CMS would no longer be able to find the subsequent records, so it will truncate the file! Forget about replacing records when working with Variable files.
If you need to change records in variable files, then a first safe solution would be to read the complete file into storage, change the records there, and write the complete file back.
For large files, this is not an elegant solution as you need a lot of storage.
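A minimal sketch of such a read-change-rewrite approach with EXECIO (the fileid MY VARFILE A and the record number are made up for the illustration):

/* Sketch only: safely change one record of a V-format file by     */
/* reading the complete file into a stem, changing the record in   */
/* storage, and rewriting the complete file.                       */
'EXECIO * DISKR MY VARFILE A (FINIS STEM REC.'       /* read all   */
rec.17='This is the new (longer or shorter) record 17'
'ERASE MY VARFILE A'                                 /* drop old   */
'EXECIO' rec.0 'DISKW MY VARFILE A (FINIS STEM REC.' /* rewrite    */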
Let's review another technique for file update. Suppose we have a (long running) server procedure that has to update specific records in a file (in our example, there is one record for each user on the system, and the procedure updates an activity counter for the user).
You understand that it would be a waste of resources if we read the file completely for each update. There is, however, a better approach: keep the records in storage as long as possible. XEDIT comes to our rescue here! Although we are discussing the update of records, we'd better start a new topic, to give XEDIT the importance it merits...
We load the file in storage using XEDIT and keep using XEDIT to update the records. A commit is always possible by issuing a SAVE command from time to time, or for each update as in our example here. Note that you can't rely on XEDIT's AUTOSAVE feature, as this only works when XEDIT refreshes the screen.
The example will demonstrate yet another technique. We call it Self Contained EXECs.
/* This exec sets itself in WAKEUP waiting for messages. These  */
/* are written/replaced in a file, and a count per user is kept */
Fid='MESSAGE FILE A'
Parse arg howcalled
if howcalled='$$RE--START$$' then signal Under_Xedit
/*****************************************************************/
/* As we'll need XEDIT, we must be sure XEDIT is in the air      */
/*****************************************************************/
'SUBCOM XEDIT'                   /* is XEDIT alive somewhere ?   */
if rc<>0 then do                 /* No, not yet, we'll have to start it */
  parse source . . myname .      /* who are we ?                 */
  push 'COMMAND CMS EXEC' myname '$$RE--START$$'
  'XEDIT' fid '(NOPROF'          /* start xedit                  */
  exit                           /* we'll come back here when all's done */
end
/*****************************************************************/
/* Main procedure when XEDIT is started                          */
/*****************************************************************/
Under_Xedit:
address XEDIT
'XEDIT' fid '(NOPROF'            /* get good file                */
'COMMAND SET SYNONYM OFF#',
'COMMAND SET MACRO OFF#',
'COMMAND SET MSGMODE OFF'
do forever
  address 'CMS' 'WAKEUP (IUCVMSG QUIET'
  if rc=6 then call exit         /* console interrupt            */
  parse pull 10 FromUser . 19 MsgText
  call Update_Msg_File
end
/*****************************************************************/
/* Routine to update the file in storage                         */
/*****************************************************************/
Update_Msg_File:
'-* FIND' left(FromUser,8,'_')   /* look for same user           */
if rc=0 then do
  'EXTRACT /CURLINE/'
  parse var curline.3 . nbr .
  'REPLACE' left(FromUser,8) right(nbr+1,3,0) date() time() MsgText  /* bump count */
end
else 'INPUT' left(FromUser,8) '001' date() time() MsgText
'-* MACRO SORT * 1 8'            /* Sort the file on userid      */
'SAVE'                           /* Write updated file on disk   */
return
/*****************************************************************/
/* Exit routine                                                  */
/*****************************************************************/
Exit:
'COMMAND QUIT'                   /* quit the file we XEDITed     */
exit
The first thing to note is that the filetype is EXEC and not XEDIT. This is a must, as we want to be able to start the procedure from CMS.
At the start of the procedure, the flow of events is the following:
1. Started from CMS, the exec receives no $$RE--START$$ argument, and the SUBCOM XEDIT test(footnote 1) shows that XEDIT is not yet active.
2. The exec therefore stacks a command that will re-invoke itself with the $$RE--START$$ argument, and then starts XEDIT on the file.
3. XEDIT reads the stacked command, the exec is executed again, finds the $$RE--START$$ argument, and branches to the Under_Xedit: routine, where it now runs with XEDIT at its disposal.
What are the advantages of our self-contained procedure? Well, first of all, we have only one file to maintain, and second, if we start it from within the XEDIT environment (remember, FILELIST and RDRLIST also put you into XEDIT environments), then the SUBCOM XEDIT returns a code 0 and we immediately jump to our Under_Xedit: routine. In short, our procedure can be executed from both CMS and XEDIT environments!
Note that the user will see nothing of XEDIT's activity!
The procedure we just discussed uses the Data In Memory (DIM) technique. In general, loading and keeping data in memory will always be best for performance. Why else would the labs put so much effort into developing and improving VM Data Spaces or Minidisk Caching? XEDIT is a useful tool for DIM.
This is not a CMS Pipelines course, but it is impossible to ignore it either.
'PIPE < USER DIRECT !Find USER! > ALL-MY USERS A'
This pipeline has 3 stages: < (read a file), FIND, and > (write a file). This is what happens when this pipeline is started: the < stage reads the first record of USER DIRECT and passes it to FIND; FIND passes the record on to > if it starts with USER, and discards it otherwise; the > stage adds the record to its output buffer. Only then does < read the next record, and this continues until the end of the input file.
You can conclude that, apart from the buffering used by < and >, only one single record flows through the pipeline(footnote 2).
In the following examples, we now suppose that the size of the file USER DIRECT is 10000 records, of which 1000 are USER cards.
/*1*/ 'PIPE < USER DIRECT !FIND USER!SORT 6-13! > ALL-MY USERS A'
/*2*/ 'PIPE < USER DIRECT !SORT 6-13!FIND USER! > ALL-MY USERS A'
Without being a great CMS Pipelines specialist, can you say which of the above solutions is friendlier to your paging subsystem?
A Pipeline can also suddenly collapse. If we take only the first 13 records of a file:
'PIPE < USER DIRECT ! TAKE 13 ! > 13 CARDS A'
The TAKE stage, once it has received 13 records, indicates an "end-of-file" condition to the > stage, which in turn writes its buffer to disk and closes the output file. Both the TAKE and > stages then tell the CMS Pipeline dispatcher that they are done. The < stage will try to pump the next file record into the pipe, but will get a non-zero return code indicating that nobody is "listening" anymore. The < stage will thus stop reading any further and close the input file.
Note: Don't conclude from this that the stages "talk" to each other. They only continue to work if input is provided, output is possible or their work is not yet completed.
There are other CMS Pipeline stages that can handle files, such as FILERAND (random file I/O) or FILEUPDATE. But we don't want to confuse you further here.
When you understand the power and the ease of use of CMS Pipelines, should we completely forget EXECIO then ?
Not completely:
Once you have become a CMS Pipeline expert(footnote 3), you will be able to replace a lot of REXX coding with a simple CMS Pipeline. The result will then normally also be much faster.
But here, we give you two cases where EXECIO is better suited than CMS Pipelines.
If you replace:

'EXECIO 1 DISKW MY LOGFILE A (STRING', date() time() 'I did just that'
...
'EXECIO 1 DISKW MY LOGFILE A (STRING', date() time() 'this exec did another thing'
...
Exit:
'FINIS MY LOGFILE A'

by:
'PIPE Literal' date() time() 'I did just that',
     '! >> MY LOGFILE A'                  /* append to logfile */
...
'PIPE Literal' date() time() 'this exec did another thing',
     '! >> MY LOGFILE A'                  /* append to logfile */
...
you will lose a lot of performance, as PIPE closes the file at each write operation, whereas EXECIO is able to keep the file open until you issue the FINIS command (or specify the FINIS option on the last EXECIO command in your procedure). Closing and re-opening a file means overhead (updates to the File Status Table or, in the case of SFS, updating the catalog with the last reference date and time, and so on). Also, if a file is open, CMS finds it immediately and does not have to search for it on the accessed disk(s).
If you want to use PIPE anyway, then you would need to accumulate the records in storage (a REXX stem), and only write them out at the end of the job. For example:
...
Call log date() time() 'I did just that'
...
Call log date() time() 'this exec did another thing'
...
Log:
  if symbol('LOGN')<>'VAR' then logn=0
  logn=logn+1 ; log.logn=arg(1)
  return
Exit:
  if symbol('LOGN')='VAR' then do
    log.0=logn ; 'PIPE STEM LOG.!>> MY LOGFILE A'
  end
  exit
But, if your procedure abends (due to syntax errors, HI or HX), you will lose logging records, as the REXX storage containing the stem is cleared.
If, however, you use:
'EXECIO 1 DISKW' fileid '(STRING I did just that'
your records are written in the file's I/O buffer, and whenever you reach the CMS Ready; state after the abend, your file will be closed by CMS (unless your virtual machine is forced off the system, of course).
Note: If you want full control over the commits and rollbacks of your File I/O, then the only good solution is to use SFS and handle the I/O with CSL routines.
Note: For files managed by SFS, at abend time, the updates are rolled back and the files are closed.
To limit storage needs, a typical EXECIO construction looks like this:
ReadAtOnce=1000
eof=0
do until eof
  'EXECIO' ReadAtOnce 'DISKR HUGE FILE A (STEM REC.'
  if rc>2 then call errexit rc,'Problems with EXECIO'
  eof=(rc=2)
  do i=1 to Rec.0
     .... handle the record ...
  end i
end
'FINIS HUGE FILE A'
The same approach would be very difficult with CMS Pipelines, and anyway, you would end up with a situation where CMS Pipelines closes the file after every 1000 records.
(1) SUBCOM is a CMS function, but it is not explained in the CMS Command Reference. You can find the complete information in the REXX Reference Guide.
(2) Only a few CMS Pipeline stages have no other option than to buffer the records before they can proceed. An obvious example is SORT.
(3) A CMS Pipeline expert is called a master plumber...