Troubleshooting TPS corruption
Originally published in Clarion Magazine.
Although TPS files are generally quite reliable, every once in a while someone posts an urgent message in the SoftVelocity newsgroups about TPS file corruption on a network. Eric Vail recently posted the following list of questions and answers, reprinted here with Eric's permission. For additional resources, see Mark Riffey's network troubleshooting checklist.
Questions to ask yourself
- Is your program the only application sharing data over the LAN?
- When did data corruption begin?
- What changed in the LAN at that point?
- Was there a new program loaded or some new hardware introduced?
- Is there enough open space on the server where the data is stored?
- When was the last time the cooling fans were checked on the server's processors, or on the case for that matter? When a server is running a database its processors run hotter than usual, so failing fans can cause the server to misbehave.
- Does the server ever lock up? What about the workstations? This could be caused by a flaky power supply and/or RAM.
- When was the last time the server or workstation disks were defragged? (NT does NOT defrag on the fly like Novell.) There are utilities out there, but usually we partition the drive into C and D, putting the boot and system files on C and all programs and data on D. That way, when we do maintenance, we can copy the D data somewhere else, reformat the drive, and run a scandisk before copying the programs and data back. On W2K you can defrag the partitions in Computer Management.
Verifying network integrity
If none of the above solves the problem, then I would start looking at network hardware, specifically the network card and cables for the server. Here is an easy way to verify network integrity.
Get a block of data whose size and number of files you can verify. I usually use the I386 directory from an NT CD. It is large and has lots of big and little files in it, so you get a good sampling of file types and sizes.
Now put that directory somewhere on the server, then copy it from the server's drive in the following sequence.
- Copy from the server to each station. Time it to see how long it takes, then make sure that all the data arrives. Right-click the folder, choose Properties, and make sure the number of files matches the server. Also make sure the total size of the files matches: not the space the files take up on disk, but the total file size, which is the figure shown above the size on disk. That is the true size of the files. (A small sketch after this list automates the count-and-size check.)
- Next do the same from station to station.
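If you would rather not eyeball the Properties dialogs, a short Clarion utility can do the counting for you. This is only a rough sketch under a few assumptions: the procedure name and both folder paths are hypothetical placeholders, it only looks one level deep (no subdirectory recursion), and it relies on the standard DIRECTORY statement and the ff_: attribute equates from EQUATES.CLW.

  PROGRAM
  INCLUDE('EQUATES.CLW'),ONCE
  MAP
FolderStats          PROCEDURE(STRING pFolder)
  END
  CODE
  ! Run the same check against the master copy on the server and against
  ! the copy on each station, then compare the two results by hand.
  ! Both paths are examples only - substitute your own.
  FolderStats('\\Server\Data\I386')
  FolderStats('C:\Temp\I386')

FolderStats          PROCEDURE(STRING pFolder)
Files                  QUEUE,PRE(Fil)       ! long-filename layout expected by DIRECTORY
name                     STRING(256)
shortname                STRING(13)
date                     LONG
time                     LONG
size                     LONG
attrib                   BYTE
                       END
FileCount              LONG
TotalBytes             DECIMAL(15)
i                      LONG
  CODE
  ! Load every normal file in the folder into the queue
  DIRECTORY(Files, CLIP(pFolder) & '\*.*', ff_:NORMAL)
  LOOP i = 1 TO RECORDS(Files)
    GET(Files, i)
    IF BAND(Files.attrib, ff_:DIRECTORY) THEN CYCLE.   ! skip any directory entries
    FileCount  += 1
    TotalBytes += Files.size
  END
  ! The file count and total byte size should match exactly on every copy
  MESSAGE(CLIP(pFolder) & ': ' & FileCount & ' files, ' & TotalBytes & ' bytes', |
          'Folder check')

Run it once against the server's master copy and once against each copied folder; if the counts or byte totals differ, that copy dropped data.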
If you find that one station-to-station copy is slower than another, or some files were dropped, move the cable in the hub first and make sure that the hub is okay. I have seen a single port in a hub go bad and cause all kinds of flaky problems. The problem could also be a loose cable or a bad network card.
If all server-to-workstation and workstation-to-workstation copies are the same then you need to go back to the server and check the following:
- How much RAM is in the server, and what type? The problem may be memory going bad.
- What type of hard drives are in the server, and in what configuration? If they are mirrored, are they healthy? If RAID 5, is the array optimal?
- Is the swap file size adequate? It should be twice the size of the physical RAM if the server is sharing data files on it; that is not the default setting, by the way. So if the server has 256 meg of RAM, the swap file should be 512 meg minimum, with a 768 meg ceiling. (The short sketch after this list works the same numbers.)
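Just to make that rule of thumb concrete, here is a trivial sketch using the figures above; the 256 is only an example value for the server's physical RAM.

  PROGRAM
ServerRAM    LONG(256)           ! physical RAM in MB (example value only)
SwapMinimum  LONG
SwapCeiling  LONG
  CODE
  SwapMinimum = ServerRAM * 2    ! twice physical RAM: 256 MB -> 512 MB minimum
  SwapCeiling = ServerRAM * 3    ! and a 768 MB ceiling for a 256 MB server
  MESSAGE('Swap file: ' & SwapMinimum & ' to ' & SwapCeiling & ' MB', 'Swap sizing')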
Microsoft Security Essentials
MSE has been a real headache for Clarion developers using TPS files. In January of 2011 Robert Paresi posted the following way of detecting MSE's presence:
  if GetReg(REG_LOCAL_MACHINE,'SOFTWARE\Microsoft\Microsoft Security Essentials','Market') <> '' OR |
     GetReg(REG_LOCAL_MACHINE,'SOFTWARE\Microsoft\Microsoft Antimalware','InstallLocation') <> ''
    ! One of the two products is installed - warn the user and leave the procedure
    message('You cannot use this program with TPS file database.','Microsoft Security Essentials Installed',icon:exclamation)
    do ProcedureReturn
  end
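A note on how this works: GETREG returns the requested registry value as a string, and a blank string when the key or value does not exist, so the test fires only when one of the two products has left its registry keys behind. The MESSAGE call warns the user, and DO ProcedureReturn then exits through the procedure's ProcedureReturn routine, presumably the standard routine the Clarion templates generate, so the program never gets as far as opening its TPS files.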