<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html;charset=ISO-8859-1" http-equiv="Content-Type">
</head>
<body bgcolor="#ffffff" text="#000000">
VA wrote:
<blockquote cite="mid:422975.34274.qm@web57004.mail.re3.yahoo.com"
type="cite">
<table border="0" cellpadding="0" cellspacing="0">
<tbody>
<tr>
<td
style="font-family: inherit; font-style: inherit; font-variant: inherit; font-weight: inherit; font-size: inherit; line-height: inherit; font-size-adjust: inherit; font-stretch: inherit;"
valign="top">Does wget have an option to obtain only the text of a
webpage?<br>
<br>
The recursive downloading option lets me create the website
locally, but I want only the content (the text) from all the HTML pages
on the site, in one text (ASCII) file. <br>
<br>
Any ideas?<br>
<br>
Thanks,<br>
Virginia<br>
<br>
</td>
</tr>
</tbody>
</table>
<br>
<pre wrap="">
<hr size="4" width="90%">
_______________________________________________
nmglug mailing list
<a class="moz-txt-link-abbreviated" href="mailto:nmglug@nmglug.org">nmglug@nmglug.org</a>
<a class="moz-txt-link-freetext" href="https://nmglug.org/mailman/listinfo/nmglug">https://nmglug.org/mailman/listinfo/nmglug</a>
</pre>
</blockquote>
Use wget in conjunction with html2text or html2rtf, depending on what you
are really trying to do, and then just cat the results together. A bit of
scripting around these tools should get you a single command line that
will give you the complete text of the site in one file.<br>
<br>
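A minimal sketch of that pipeline, assuming GNU wget and the html2text
utility are installed (the URL and output filename are placeholders):<br>

```shell
# site2txt: mirror a site with wget, then flatten every downloaded
# HTML page into one plain-text file via html2text.
# Assumes GNU wget and html2text are on the PATH.
site2txt() {
    url=$1
    out=$2
    dir=$(mktemp -d)
    # Fetch only HTML pages, recursively, without climbing above
    # the starting directory.
    wget --quiet --recursive --no-parent --accept html,htm \
         --directory-prefix="$dir" "$url"
    # Convert each downloaded page and concatenate the results
    # into a single text file.
    find "$dir" -name '*.htm*' -print0 |
    while IFS= read -r -d '' f; do
        html2text "$f"
    done > "$out"
}

# usage (placeholder URL and filename):
# site2txt https://example.org/ site.txt
```

The `--accept html,htm` filter keeps wget from fetching images and other
binaries you would only throw away, and html2text handles stripping the
markup down to readable text.<br>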
Andy<br>
</body>
</html>