Last updated: 2019-06-05


  • Config file
  • Background download
  • Resume
  • Download an entire directory
  • View header
  • Download a list of files
  • Mirror Website
  • Download certain file types within a directory
  • login
  • Limit Speed
  • Other Opts
  • wget 401 then 200
  • Drupal cron jobs
  • Other Tools


wget parallel downloads
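wget itself downloads sequentially; a common workaround is to launch several wget processes via xargs -P. A minimal dry-run sketch (the URLs are placeholders; remove the word "echo" to actually download):

```shell
# Dry run: print the wget commands that would be executed,
# up to 4 of them in parallel. Placeholder URLs; drop the
# leading "echo" before "wget" to perform the real downloads.
printf '%s\n' \
  http://example.com/a.iso \
  http://example.com/b.iso \
  http://example.com/c.iso |
xargs -n 1 -P 4 echo wget -q -c
```

-n 1 passes one URL per wget invocation; -P 4 caps the number of concurrent processes.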






Config file


# To store the password in a wgetrc file

wget --config=/path/to/wgetrc ...
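A minimal wgetrc sketch: the option names below are real wgetrc commands, while USER / PASS are placeholders. Keep the file private with chmod 600.

```
# /path/to/wgetrc -- placeholder credentials
http_user = USER
http_password = PASS
tries = 45
```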


Example: Background download


wget -t 45 -o log.txt http://link &


wget -b -t 45 -c -o log http://link

# -b,  --background                                                # run in the background (equivalent to appending &)

# -o logfile                                                       # Log all messages to logfile.

# --tries=45                                                       # Default: 20; 0 = retry indefinitely


Example: Resume

wget -c bigfile

# -c   resume an interrupted download


Example: Download an entire directory

wget -cp http://link/directory/

# -p, --page-requisites                             # download all files needed to display the page (images, CSS, ...)


Example: View header

# Show the server's response headers before downloading (--server-response)

wget -S http://web-site/


Example: Download a list of files

wget -nc -i dl.file

# -i <file>                         the file contains one link per line

# -nc, --no-clobber           do not re-download files that already exist locally, even incomplete ones (the opposite of -c)
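dl.file is just a plain-text list, one URL per line, e.g. (placeholder links):

```
http://example.com/a.iso
http://example.com/b.iso
ftp://example.com/pub/c.tar.gz
```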


Mirror Website


Method 1: -r

wget --convert-links -N -l2  -P/tmp -r http://www.gnu.org/

# -r                                    Create a mirror of the GNU web site (default depth: 5 levels)
# -l                                    how many levels deep to download (nested levels)

# -P                                    save the files under /tmp (default PREFIX ".")
# --convert-links                view the documents off-line
# -N,  --timestamping        don't re-retrieve files unless newer than local.

# -nd                                 do not create directories (without -nd, a URL A/B/C creates the directory tree A/B/C)
# -np                                 do not recurse into the parent directory

-I                                       comma-separated list of directories included in the retrieval.
                                         Any other directories will simply be ignored. The directories are absolute paths.

-L                                       Follow relative links only. The following are NOT relative links:
                                          <a href="/foo.gif">
                                          <a href="/foo/bar.gif">
                                          <a href="http://www.server.com/foo/bar.gif">

-D <domain-list>                    specify the domains that will be followed,
                                         thus limiting the recursion only to hosts that belong to these domains.


# After downloading, the local directory "dl.dahunter.org/mysql/c7/m80" is created

wget -r -l 1 -np https://dl.dahunter.org/mysql/c7/m80/

# Download the files into the current directory "."

wget -r -l 1 -nd -np https://dl.dahunter.org/mysql/c7/m80/

Method 2: "-m"

wget -m -w 5 http://www.gnu.org/

  • -m,  --mirror                     equivalent to -N -r -l inf --no-remove-listing  ( -l inf is the same as -l 0 )
  • -k,   --convert-links
  • -w,  --wait                         wait the given number of seconds between downloads



Example: Download certain file types within a directory

wget -r -l1  -A'.gif,.swf,.css,.html,.htm,.jpg,.jpeg' <url>


Example: Limit Speed


--limit-rate=100k

# Limit download speed; the unit is bytes/sec, and the k / m suffixes may be used

-N,  --timestamping

# only download files that are newer than the local copies





wget -O - ftp://USER:PASS@server/README


# -O -     write the downloaded content to stdout ("-")
# -O file


  • --user=USER --password=PASS  # without --password, wget will not prompt for the password
  • --user=USER --ask-password      # wget prompts for the password
  • --use-askpass=command            # If no command is specified, the WGET_ASKPASS env variable is used


chmod 600 ~/.wgetrc



# wgetrc setting:  ask_password = on/off


Other Opts


-U,  --user-agent="user agent"







wget 401 then 200


401: the server replies with realm="..."

200: wget sends the password and is logged in

wget and most other programs request a basic authentication challenge from the server before sending the credentials.

This has been wget's default behaviour since version 1.10.2.

You can change that behaviour with the --auth-no-challenge option.
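A dry-run sketch of the pre-emptive variant (URL and credentials are placeholders; drop the leading "echo" to send the real request):

```shell
# Dry run (placeholder URL/credentials): --auth-no-challenge sends the
# Basic-auth Authorization header with the very first request, skipping
# the 401 challenge round trip described above.
echo wget --auth-no-challenge --user=USER --password=PASS http://web-site/page
```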


Drupal cron jobs


# -O -    write the output to stdout, which is then discarded via /dev/null


0 * * * * wget -O - -q http://?????  > /dev/null 2>&1  &&  touch /root/getlink


Other Tools


Parallel download tools