The reason for switching from batch files to PowerShell scripts is to improve error checking in the process. Does the cmdlet for copying have advantages in this regard?
If a batch file already exists that uses xcopy to copy files individually by filename, is there any advantage to converting the syntax to Copy-Item?
What are the advantages of robocopy, xcopy, and Copy-Item compared to each other? For example, does robocopy have an advantage when working with a large number of small files over a reliable network? If this script is to be run simultaneously on hundreds of computers to copy hundreds of files to each of them, will that affect the decision? Should the decision be focused mainly on the permissions of the files?
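For concreteness, the kind of line being converted and a rough PowerShell counterpart might look something like this (the paths, filenames, and switches are made up for illustration, not taken from the actual batch file):

```powershell
# The existing batch file presumably has one xcopy call per file, along these lines;
# /Y suppresses the overwrite prompt, /R overwrites read-only files.
xcopy \\server\share\app\settings.xml C:\App\ /Y /R

# A rough Copy-Item equivalent of that single-file copy.
Copy-Item -Path '\\server\share\app\settings.xml' -Destination 'C:\App' -Force
```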
The primary advantage is just that you can send objects to Copy-Item through a pipe instead of strings or filespecs, so you could do something like the sketch below. That's kind of a poor example (you could do the same thing with Copy-Item -Filter), but it's an easy one to come up with on the fly. It's pretty common when working with files to end up with a pipeline from Get-ChildItem, and I personally tend to do that a lot just because of the -Recurse -Include bug with Remove-Item.
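A minimal sketch of that kind of pipeline, assuming made-up source and destination paths and an arbitrary filter on file age:

```powershell
# Collect FileInfo objects, filter them on any property you like (here: LastWriteTime),
# then pipe them straight into Copy-Item. Because these are objects rather than
# filespec strings, the filtering logic can be as rich as you need.
# Note: this flattens the results into the destination folder; it does not preserve
# the source directory structure.
Get-ChildItem -Path 'C:\Source' -Recurse -Filter '*.log' |
    Where-Object { $_.LastWriteTime -gt (Get-Date).AddDays(-7) } |
    Copy-Item -Destination 'D:\Backup' -Force
```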
You also get PowerShell's error trapping, special parameters like -PassThru, -WhatIf, -UseTransaction, and all the common parameters as well. Copy-Item -Recurse can replicate some of xcopy's tree-copying abilities, but it's pretty bare-bones. A sketch of what that looks like in a script follows below.
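A hedged sketch of those parameters and the error trapping in practice; the paths are placeholders, and the try/catch with -ErrorAction Stop is just standard PowerShell error handling, not anything specific to Copy-Item:

```powershell
# -WhatIf previews the copy without performing it; handy while developing the script.
Copy-Item -Path 'C:\Source\*' -Destination 'D:\Backup' -Recurse -WhatIf

# -ErrorAction Stop turns copy failures into terminating errors that try/catch can see,
# and -PassThru emits the copied items so they can be logged or counted.
try {
    $copied = Copy-Item -Path 'C:\Source\*' -Destination 'D:\Backup' -Recurse -Force `
        -ErrorAction Stop -PassThru
    Write-Output "Copied $($copied.Count) items."
}
catch {
    Write-Error "Copy failed: $_"
    exit 1
}
```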
Now, if you need to maintain ACLs, ownership, auditing, and the like, then xcopy or robocopy are probably going to be much easier because that functionality is built in. I'm not sure how Copy-Item handles copying encrypted files to non-encrypted locations (xcopy has some ability to do this), and I don't believe Copy-Item supports managing the archive attribute directly.

If it's speed you're looking for, then I would suspect that xcopy and robocopy would win out. Managed code has higher overhead in general. Xcopy and robocopy also offer a lot more control over how well they work with the network.
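For comparison, a sketch of a robocopy call (invoked from PowerShell) that preserves ACLs and uses multithreading; the paths are placeholders and the switch values are examples to tune, not recommendations:

```powershell
# /E          copy subdirectories, including empty ones
# /COPY:DATS  copy Data, Attributes, Timestamps, and Security (NTFS ACLs)
# /MT:16      multithreaded copy, which helps a lot with many small files
# /R:2 /W:5   retry twice with a 5-second wait instead of robocopy's very high defaults
robocopy \\server\share\source C:\Dest /E /COPY:DATS /MT:16 /R:2 /W:5

# Robocopy exit codes below 8 indicate success (possibly with skipped or extra files),
# so translate them into something the rest of the script can act on.
if ($LASTEXITCODE -ge 8) {
    Write-Error "robocopy reported failures (exit code $LASTEXITCODE)"
    exit 1
}
```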