
Removing duplicate records from a physical file

I have a physical file (about 900,000 records, roughly 1 GB) containing multiple records with the same key (not very useful, but the result of multiple copies). I would like to remove these duplicate records in a simple way (SQL, ...).

I already tried creating a file with a unique key and then copying the data into it, but it takes a very long time (probably because the index is updated for each record).

I've seen some SQL advice, but the last step (a DELETE command run against a view over the file) doesn't work on my V4R5 system:

DELETE FROM tmp/cq0942xxxx
    WHERE DUPRRN in (SELECT MRRN FROM tmp/cq0942xxx) 

It really depends on which columns are involved in your definition of "duplicate data." The query below assumes that the unique-identifier column (idcol) is distinct for every row, but that two other columns carry duplicate data; for each group of duplicates, the row with the lowest idcol survives and the rest are deleted.

DELETE FROM mytab
WHERE EXISTS (SELECT idcol FROM mytab innertab
   WHERE innertab.FirstName = mytab.FirstName
     AND innertab.LastName = mytab.LastName
     AND innertab.idcol < mytab.idcol) 

Just about every method is going to be fairly slow when a large number of rows has to be scanned and processed.
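To make the pattern concrete, here is a minimal sketch of the correlated-EXISTS de-duplication using Python's built-in sqlite3 module on an in-memory table. The table and column names (mytab, idcol, FirstName, LastName) and the sample rows are illustrative assumptions, not from the original question; the same DELETE would need DB2/400-appropriate naming on a real system.

```python
import sqlite3

# Build a throwaway in-memory table with deliberate duplicates.
# "mytab", "idcol", "FirstName", "LastName" are the illustrative
# names used in the answer above, not real production objects.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE mytab (idcol INTEGER PRIMARY KEY, FirstName TEXT, LastName TEXT)"
)
conn.executemany(
    "INSERT INTO mytab VALUES (?, ?, ?)",
    [(1, "Ann", "Smith"), (2, "Ann", "Smith"),
     (3, "Bob", "Jones"), (4, "Ann", "Smith")],
)

# Delete every row for which an earlier row (lower idcol) has the same
# name columns; only the lowest-idcol row in each duplicate group survives.
conn.execute("""
    DELETE FROM mytab
    WHERE EXISTS (SELECT idcol FROM mytab innertab
                  WHERE innertab.FirstName = mytab.FirstName
                    AND innertab.LastName  = mytab.LastName
                    AND innertab.idcol     < mytab.idcol)
""")
conn.commit()

survivors = conn.execute(
    "SELECT idcol, FirstName, LastName FROM mytab ORDER BY idcol"
).fetchall()
print(survivors)  # [(1, 'Ann', 'Smith'), (3, 'Bob', 'Jones')]
```

Note that the subquery is re-evaluated per candidate row, so on a 900,000-record file this approach still implies a lot of scanning unless an index covers the duplicated columns.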

