I have a list of URLs that I need to check, to see if they still work or not. I would like to write a bash script that does that for me.
I only need the returned HTTP status code, i.e. 200, 404, 500 and so forth. Nothing more.
EDIT: Note that there is an issue if the page says '404 not found' but still returns a 200 OK status. It's a misconfigured web server, but you may have to consider this case.
For more on this, see Check if a URL goes to a page containing the text '404'
Manu
6 Answers
Curl has a specific option, `--write-out`, for this:

- `-o /dev/null` throws away the usual output
- `--silent` throws away the progress meter
- `--head` makes a HEAD HTTP request, instead of GET
- `--write-out '%{http_code}\n'` prints the required status code
To wrap this up in a complete Bash script:
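The script itself did not survive the formatting here; a minimal sketch, assuming the URLs sit one per line in a file named `url-list.txt` (the filename is an assumption):

```shell
#!/bin/bash
# Read one URL per line from url-list.txt (hypothetical name) and
# print "<status-code> <url>" for each, discarding the response body.
while read -r url; do
  curl -o /dev/null --silent --head --write-out "%{http_code} $url\n" "$url"
done < url-list.txt
```

Note that some servers answer HEAD differently from GET (or reject it outright), so drop `--head` if the results look suspicious.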
(Eagle-eyed readers will notice that this uses one curl process per URL, which imposes fork and TCP connection penalties. It would be faster if multiple URLs were combined in a single curl, but there isn't space to write out the monstrous repetition of options that curl requires to do this.)
Phil

user551168
Extending the answer already provided by Phil: adding parallelism to it is a no-brainer in bash if you use xargs for the call.

Here is the code:
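The call itself is missing above; a sketch of the parallel version, assuming a file `url.lst` with one URL per line (the filename is an assumption):

```shell
# Probe every URL in url.lst, up to 10 at a time, printing
# "<url>: <status-code>" for each.
xargs -n1 -P10 curl -o /dev/null --silent --head \
  --write-out '%{url_effective}: %{http_code}\n' < url.lst
```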
- `-n1`: use just one value (from the list) as argument to the curl call
- `-P10`: keep 10 curl processes alive at any time (i.e. 10 parallel connections)
Check the `--write-out` parameter in the curl manual for more data you can extract using it (times, etc.).

In case it helps someone, this is the call I'm currently using:
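The original call is not reproduced here; a sketch along those lines (the exact `--write-out` fields are an assumption — see `man curl` for the full list of variables):

```shell
# Probe every URL in url.lst in parallel and record the status code
# plus a couple of timing fields as semicolon-separated values.
xargs -n1 -P10 curl -o /dev/null --silent --head \
  --write-out '%{url_effective};%{http_code};%{time_total}\n' \
  < url.lst | tee results.csv
```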
It just outputs a bunch of data into a CSV file that can be imported into any office tool.
estani
Use `curl` to fetch the HTTP header only (not the whole file) and parse it:

dogbane
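The one-liner itself did not survive the formatting; a sketch of the header-only fetch (the URL is a placeholder):

```shell
# -s: no progress meter; -o /dev/null: discard the body;
# -I: HEAD request (headers only); -w: print the status code.
curl -s -o /dev/null -I -w '%{http_code}\n' http://example.com/
```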
`wget -S -i` *file* will get you the headers from each URL in a file.

Filter through `grep` for the status code specifically.

colinross
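A sketch of that pipeline, assuming the list lives in `url-list.txt` (the filename is an assumption; `--spider` is added here so nothing is actually downloaded, and the headers arrive on stderr, hence the redirect):

```shell
# -S prints server responses (on stderr); --spider skips the download;
# grep keeps only the "HTTP/x.x <code>" status lines.
wget -S --spider -i url-list.txt 2>&1 | grep 'HTTP/'
```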
This relies on widely available `wget`, present almost everywhere, even on Alpine Linux.

The explanations are as follows:
--quiet
Turn off Wget's output.
Source - wget man pages
--spider
[ ... ] it will not download the pages, just check that they are there. [ ... ]
Source - wget man pages
--server-response
Print the headers sent by HTTP servers and responses sent by FTP servers.
Source - wget man pages
What they don't say about `--server-response` is that those headers are printed to standard error (stderr), hence the need to redirect them to standard output.

With the output on standard output, we can pipe it to `awk` to extract the HTTP status code. That code is:

- the second (`$2`) non-blank group of characters: `{$2}`
- on the very first line of the header: `NR==1`

And because we want to print it... `{print $2}`.

Salathiel Genèse
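Putting the options and the awk filter together, the full command sketched by this answer would look like the following (the URL is a placeholder; note that some wget builds, e.g. BusyBox, may behave differently):

```shell
# Print only the status code of the response, e.g. "200".
wget --quiet --spider --server-response http://example.com/ 2>&1 \
  | awk 'NR==1{print $2}'
```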
protected by codeforester, Nov 18 '18 at 7:14