Hi Gurus, could you please help me write a script to read a text file word by word, separated by spaces? I want to pick the strings from the text file and store them in a different file. Thanks

By the way, it's not 100% clear WHAT you want to do, but try this (it caters for non-alphanumerics).
awk '{for(i=1;i<=NF;i++) print $i}'
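
For example, with a hypothetical file words.txt containing the single line "one two three" (the file name and contents are purely illustrative), the one-liner prints each field on its own line:

awk '{for(i=1;i<=NF;i++) print $i}' words.txt
one
two
three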

See if the man page for tr can help you.

Basically you design a character set that represents a word. Maybe A-Za-z or A-Za-z0-9_ or whatever.

Then you use tr options to do 3 things:

  1. Use the complement of your character set (what the -c option selects) as the characters to translate.

  2. Translate all such characters to a newline.

  3. Squeeze all repeated translated characters to a single one.

Something like:

tr -cs 'A-Za-z0-9_' '\012' < InFile > OutFile

Result is a stream of text, one word per line.
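
For instance, assuming InFile held the made-up line "foo, bar; baz-qux", every space and punctuation character would be translated to a newline and squeezed, giving:

foo
bar
baz
qux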

What is the purpose of reading word by word?
Do you want to extract/filter some words from the text file?
Is there any matching criterion that you forgot to mention?

As we all know, there are many text- and stream-editing tools in Linux/Unix, but the actual goal plays a key role in choosing the proper tool.
I think this is a very general question, and we can't expect a specific answer.
It would be better to explain what is behind your question.

To get all words in the input file:
cat TextFile | tr " " "\n" > WordFile

… or to get all word types (each distinct word once):
cat TextFile | tr " " "\n" | sort -u > WordFile

etc.
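
As a quick illustration, suppose TextFile held the made-up line "the cat sat on the mat". The first pipeline prints every word in order (the, cat, sat, on, the, mat, one per line), while the second prints each distinct word exactly once, sorted:

cat
mat
on
sat
the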

Hi,

See below for a very, very simple example:

#!/bin/bash

# Read INPUT_FILE line by line, splitting each line into three
# whitespace-separated fields: name, family, and age.
while read name family age
do
    echo "your name is $name" >> OUTPUT_FILE
    echo "your family is $family" >> OUTPUT_FILE
    echo "your age is $age" >> OUTPUT_FILE
done < INPUT_FILE

INPUT_FILE contains

jones marta 12
fari mehaban 30
robert mobarak 43
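
Running the script against that INPUT_FILE (assuming OUTPUT_FILE starts out empty, since >> appends) leaves OUTPUT_FILE containing:

your name is jones
your family is marta
your age is 12
your name is fari
your family is mehaban
your age is 30
your name is robert
your family is mobarak
your age is 43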