I have a small elisp script which applies Perl::Tidy to the region or the whole file. For reference, here's the script (borrowed from EmacsWiki):
(defun perltidy-command (start end)
  "The perltidy command we pass markers to."
  (shell-command-on-region start end "perltidy" t t
                           (get-buffer-create "*Perltidy Output*")))

(defun perltidy-dwim (arg)
  "Perltidy the region, or the entire buffer if no region is active."
  (interactive "P")
  (let ((point (point)) start end)
    (if (and mark-active transient-mark-mode)
        ;; Use the active region if there is one...
        (setq start (region-beginning)
              end (region-end))
      ;; ...otherwise tidy the whole buffer.
      (setq start (point-min)
            end (point-max)))
    (perltidy-command start end)
    (goto-char point)))
(global-set-key "\C-ct" 'perltidy-dwim)
I'm using the current Emacs 23.1 for Windows (EmacsW32). The problem I'm having is that if I run that script on a UTF-8-encoded file ("U(Unix)" in the status bar), the output comes back Latin-1-encoded, i.e. two or more characters for each non-ASCII source character.
Is there any way I can fix that?
EDIT: The problem seems to be solved by putting (set-terminal-coding-system 'utf-8-unix) in my init.el. If anyone has other solutions, go ahead and write them!
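A more local alternative might be to bind the coding systems only around the shell command instead of changing the terminal coding system globally; a rough, untested sketch:

(defun perltidy-command (start end)
  "The perltidy command we pass markers to."
  ;; Untested sketch: force utf-8 for this one command only.
  (let ((coding-system-for-read 'utf-8-unix)
        (coding-system-for-write 'utf-8-unix))
    (shell-command-on-region start end "perltidy" t t
                             (get-buffer-create "*Perltidy Output*"))))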
Below is from the shell-command-on-region documentation: during execution, it looks for the coding system in process-coding-system-alist first; if that is nil, it then looks at default-process-coding-system.
If you want to change the encoding, you can add a matching entry to process-coding-system-alist.
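For instance, a rough sketch (the "cmdproxy" regexp is only an assumption: the pattern is matched against the name of the program Emacs actually runs, which for shell-command-on-region is your shell, not perltidy itself):

;; Sketch: each entry is (PROGRAM-NAME-REGEXP . (DECODING . ENCODING)).
;; Replace "cmdproxy" with whatever shell Emacs invokes on your system.
(add-to-list 'process-coding-system-alist
             '("cmdproxy" . (utf-8-unix . utf-8-unix)))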
Or, if you haven't set process-coding-system-alist (so it is nil), you can assign your encoding to default-process-coding-system instead.
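For example, a minimal sketch assuming you want utf-8 in both directions:

;; Car: coding system for decoding process output; cdr: for encoding text sent to it.
(setq default-process-coding-system '(utf-8-unix . utf-8-unix))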
(That is, output read from the process is decoded as utf-8, and text sent to it is encoded as utf-8.)
I also wrote a post about this if you want details.