Reputation: 31245
We've got a PHP application and want to count all the lines of code under a specific directory and its subdirectories.
We don't need to ignore comments, as we're just trying to get a rough idea.
wc -l *.php
That command works great for a given directory, but it ignores subdirectories. I was thinking the following command might work, but it is returning 74, which is definitely not the case...
find . -name '*.php' | wc -l
What's the correct syntax to feed in all the files from a directory recursively?
Upvotes: 2095
Views: 1237099
Reputation: 404
You can also filter all files within any subdirectory of a given directory.
Example usage (my case): I want to find all .js, .ts, .jsx, and .tsx files that aren't contained within the ./node_modules/ directory or any of its subdirectories (to find out how much JS/TS code I've written in a given directory).
So, the command would be: find . -path '*/node_modules/*' -prune -o \( -name '*.js' -o -name '*.ts' -o -name '*.tsx' -o -name '*.jsx' \) -print | sed 's/.*/"&"/' | xargs wc -l
You can also add many such directories to prune as follows: find . \( -path '*/node_modules/*' -o -path '*/.next/*' -o -path '*/mstore-pro-4.5.0/*' \) -prune -o \( -name '*.js' -o -name '*.ts' -o -name '*.tsx' -o -name '*.jsx' \) -print | sed 's/.*/"&"/' | xargs wc -l
Code snippet inspired by @Peter Elespuru's answer here.
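A sketch of an alternative to the sed quoting step: find's -print0 paired with xargs -0 passes names NUL-delimited, so it survives spaces and even newlines in file names:
find . -path '*/node_modules/*' -prune -o \( -name '*.js' -o -name '*.ts' -o -name '*.tsx' -o -name '*.jsx' \) -print0 | xargs -0 wc -l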
Upvotes: 1
Reputation: 2737
While I like the scripts, I prefer this one, as it shows a per-file summary as well as a total:
wc -l `find . -name "*.php"`
Upvotes: 7
Reputation: 2033
Here is a way to list the lines for each subdirectory individually:
for dir in */; do echo "${dir%/}"; find "$dir" -name '*.php' | xargs wc -l | grep total; done
That outputs something like this:
my_dir1
305 total
my_dir2
108 total
my_dir3
1438 total
my_dir4
26496 total
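A sketch of the same idea that prints one grand total per directory and tolerates spaces in file names (it concatenates the files instead of summing wc output):
for dir in */; do printf '%s: ' "${dir%/}"; find "$dir" -name '*.php' -exec cat -- {} + | wc -l; done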
Upvotes: 3
Reputation: 2071
Bash tools are always nice to use, but for this purpose it seems more efficient to just use a tool built for the job. I played with some of the main ones as of 2022, namely cloc (Perl), gocloc (Go), and pygount (Python). I got various results without tweaking them too much. The most accurate and blazingly fast of them seems to be gocloc.
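If you need to install it first, the usual Go toolchain route should work (assuming the project's standard module path):
go install github.com/hhatto/gocloc/cmd/gocloc@latest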
Example on a small laravel project with a vue frontend:
$ ~/go/bin/gocloc /home/jmeyo/project/sequasa
-------------------------------------------------------------------------------
Language files blank comment code
-------------------------------------------------------------------------------
JSON 5 0 0 16800
Vue 96 1181 137 8993
JavaScript 37 999 454 7604
PHP 228 1493 2622 7290
CSS 2 157 44 877
Sass 5 72 426 466
XML 11 0 2 362
Markdown 2 45 0 111
YAML 1 0 0 13
Plain Text 1 0 0 2
-------------------------------------------------------------------------------
TOTAL 388 3947 3685 42518
-------------------------------------------------------------------------------
$ cloc /home/jmeyo/project/sequasa
450 text files.
433 unique files.
40 files ignored.
github.com/AlDanial/cloc v 1.90 T=0.24 s (1709.7 files/s, 211837.9 lines/s)
-------------------------------------------------------------------------------
Language files blank comment code
-------------------------------------------------------------------------------
JSON 5 0 0 16800
Vuejs Component 95 1181 370 8760
JavaScript 37 999 371 7687
PHP 180 1313 2600 5321
Blade 48 180 187 1804
SVG 27 0 0 1273
CSS 2 157 44 877
XML 12 0 2 522
Sass 5 72 418 474
Markdown 2 45 0 111
YAML 4 11 37 53
-------------------------------------------------------------------------------
SUM: 417 3958 4029 43682
-------------------------------------------------------------------------------
$ pygount --format=summary /home/jmeyo/project/sequasa
┏━━━━━━━━━━━━━━━┳━━━━━━━┳━━━━━━━┳━━━━━━━┳━━━━━━━┳━━━━━━━━━┳━━━━━━┓
┃ Language ┃ Files ┃ % ┃ Code ┃ % ┃ Comment ┃ % ┃
┡━━━━━━━━━━━━━━━╇━━━━━━━╇━━━━━━━╇━━━━━━━╇━━━━━━━╇━━━━━━━━━╇━━━━━━┩
│ JSON │ 5 │ 1.0 │ 12760 │ 76.0 │ 0 │ 0.0 │
│ PHP │ 182 │ 37.1 │ 4052 │ 43.8 │ 1288 │ 13.9 │
│ JavaScript │ 37 │ 7.5 │ 3654 │ 40.4 │ 377 │ 4.2 │
│ XML+PHP │ 43 │ 8.8 │ 1696 │ 89.6 │ 39 │ 2.1 │
│ CSS+Lasso │ 2 │ 0.4 │ 702 │ 65.2 │ 44 │ 4.1 │
│ SCSS │ 5 │ 1.0 │ 368 │ 38.2 │ 419 │ 43.5 │
│ HTML+PHP │ 2 │ 0.4 │ 171 │ 85.5 │ 0 │ 0.0 │
│ Markdown │ 2 │ 0.4 │ 86 │ 55.1 │ 4 │ 2.6 │
│ XML │ 1 │ 0.2 │ 29 │ 93.5 │ 2 │ 6.5 │
│ Text only │ 1 │ 0.2 │ 2 │ 100.0 │ 0 │ 0.0 │
│ __unknown__ │ 132 │ 26.9 │ 0 │ 0.0 │ 0 │ 0.0 │
│ __empty__ │ 6 │ 1.2 │ 0 │ 0.0 │ 0 │ 0.0 │
│ __duplicate__ │ 6 │ 1.2 │ 0 │ 0.0 │ 0 │ 0.0 │
│ __binary__ │ 67 │ 13.6 │ 0 │ 0.0 │ 0 │ 0.0 │
├───────────────┼───────┼───────┼───────┼───────┼─────────┼──────┤
│ Sum │ 491 │ 100.0 │ 23520 │ 59.7 │ 2173 │ 5.5 │
└───────────────┴───────┴───────┴───────┴───────┴─────────┴──────┘
Results are mixed; the closest to reality seems to be the gocloc one, which is also by far the fastest.
Upvotes: 14
Reputation: 376
If you use Windows, it's easy in two steps:
choco install cloc
cloc project-example
(Screenshots of the steps omitted.)
P.S. You need to move or remove the build output folder and node_modules first.
Upvotes: -2
Reputation: 274
You can use this Windows PowerShell code to count any file type you want:
(gci -include *.cs,*.cshtml -recurse | select-string .).Count
Upvotes: -2
Reputation: 3861
There is a little tool called sloccount to count the lines of code in a directory.
It should be noted that it does more than you want as it ignores empty lines/comments, groups the results per programming language and calculates some statistics.
Upvotes: 24
Reputation: 267
On Windows PowerShell try this:
dir -Recurse *.php | Get-Content | Measure-Object -Line
Upvotes: 4
Reputation: 33535
Try:
find . -name '*.php' | xargs wc -l
or (when file names include special characters such as spaces)
find . -name '*.php' | sed 's/.*/"&"/' | xargs wc -l
The SLOCCount tool may help as well.
It will give an accurate source lines of code count for whatever hierarchy you point it at, as well as some additional stats.
Sorted output:
find . -name '*.php' | xargs wc -l | sort -nr
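A null-delimited variant (a sketch using -print0/xargs -0) is robust to any special characters in names; just note that xargs may split a very long list into several wc runs, producing more than one total line:
find . -name '*.php' -print0 | xargs -0 wc -l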
Upvotes: 3312
Reputation: 50220
If you want to keep it simple, cut out the middleman and just call wc with all the filenames:
wc -l `find . -name "*.php"`
Or in the modern syntax:
wc -l $(find . -name "*.php")
This works as long as there are no spaces in any of the directory names or filenames, and as long as you don't have tens of thousands of files (modern shells support really long command lines). Your project has 74 files, so you've got plenty of room to grow.
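If you ever do outgrow that limit, GNU wc can read NUL-separated file names directly from find, with no long command line involved at all (a sketch; --files0-from is GNU coreutils only):
find . -name '*.php' -print0 | wc -l --files0-from=-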
Upvotes: 9
Reputation: 9348
Similar to Shizzmo's answer, but uglier and more accurate. If you're using it often, modify it to suit and put it in a script.
This example counts non-blank lines in .php, .inc, .js, .scss, and .css files, skipping vendored paths and minified assets:
find . \! \( \( -path ./lib -o -path ./node_modules -o -path ./vendor -o -path ./any/other/path/to/skip -o -wholename ./not/this/specific/file.php -o -name '*.min.js' -o -name '*.min.css' \) -prune \) -type f \( -name '*.php' -o -name '*.inc' -o -name '*.js' -o -name '*.scss' -o -name '*.css' \) -print0 | xargs -0 cat | grep -vcE '^[[:space:]]*$'
Upvotes: 3
Reputation: 2605
You can use a utility called codel (link). It's a simple Python module to count lines with colorful formatting.
pip install codel
To count lines of C++ files (with .cpp and .h extensions), use:
codel count -e .cpp .h
You can also ignore some files/folders with the .gitignore format:
codel count -e .py -i tests/**
This will ignore all the files in the tests/ folder.
The output is a colorized per-file listing (screenshot omitted). You can also shorten the output with the -s flag: it hides the per-file details and shows only a summary for each extension (screenshot omitted).
Upvotes: 5
Reputation: 1552
The tool Tokei displays statistics about code in a directory. It shows the number of files, the total lines within those files, and the code, comments, and blanks grouped by language. Tokei is available on Mac, Linux, and Windows.
An example of the output of Tokei is as follows:
$ tokei
-------------------------------------------------------------------------------
Language Files Lines Code Comments Blanks
-------------------------------------------------------------------------------
CSS 2 12 12 0 0
JavaScript 1 435 404 0 31
JSON 3 178 178 0 0
Markdown 1 9 9 0 0
Rust 10 408 259 84 65
TOML 3 69 41 17 11
YAML 1 30 25 0 5
-------------------------------------------------------------------------------
Total 21 1141 928 101 112
-------------------------------------------------------------------------------
Tokei can be installed by following the instructions on the README file in the repository.
Upvotes: 34
Reputation: 162
If you want to count the LOC you have written, you may need to exclude some files. For a Django project, you may want to ignore the migrations and static folders. For a JavaScript project, you may exclude all pictures or all fonts.
find . \( -path '*/migrations' -o -path '*/.git' -o -path '*/.vscode' -o -path '*/fonts' -o -path '*.png' -o -path '*.jpg' -o -path '*/.github' -o -path '*/static' \) -prune -o -type f -exec cat {} + | wc -l
The exclusion patterns work as follows:
*/folder_name (prunes a folder anywhere in the tree)
*.file_extension (prunes all files of that type)
To list the files instead, modify the latter part of the command:
find . \( -path '*/migrations' -o -path '*/.git' -o -path '*/.vscode' -o -path '*/fonts' -o -path '*.png' -o -path '*.jpg' -o -path '*/.github' -o -path '*/static' \) -prune -o -print
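For instance, a sketch for a Django project that counts only *.py lines while skipping the pruned folders (the directory names are just illustrative):
find . \( -path '*/migrations' -o -path '*/static' \) -prune -o -type f -name '*.py' -exec cat -- {} + | wc -l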
Upvotes: 2
Reputation: 2658
Here's a flexible one using older Python (it works in at least Python 2.6), incorporating Shizzmo's lovely one-liner. Just fill in the types list with the filetypes you want counted in the source folder, and let it fly:
#!/usr/bin/python
import subprocess

# Shell pipeline template: concatenate every matching file, then count lines.
rcmd = "( find ./ -name '*.%s' -print0 | xargs -0 cat ) | wc -l"
types = ['c', 'cpp', 'h', 'txt']
sum = 0
for el in types:
    cmd = rcmd % (el)
    p = subprocess.Popen([cmd], stdout=subprocess.PIPE, shell=True)
    out = p.stdout.read().strip()
    print "*.%s: %s" % (el, out)
    sum += int(out)
print "sum: %d" % (sum)
Upvotes: 2
Reputation: 4133
It’s very easy with Z shell (zsh) globs:
wc -l ./**/*.php
If you are using Bash, you just need to upgrade. (To be fair, Bash 4+ can do the same recursive ** glob once you enable it with shopt -s globstar.)
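If some matches could be directories rather than plain files, zsh's glob qualifiers can restrict the glob to regular files (a sketch):
wc -l ./**/*.php(.)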
Upvotes: 4
Reputation:
If you're on Linux (and I take it you are), I recommend my tool polyglot. It is dramatically faster than either sloccount or cloc, and it is more featureful than sloccount.
You can invoke it with
poly .
or
poly
so it's much more user-friendly than some convoluted Bash script.
Upvotes: 3
Reputation: 2027
I do it like this:
Here is the lineCount.c file implementation:
#include <stdio.h>

int getLinesFromFile(const char*);

int main(int argc, char* argv[]) {
    int total_lines = 0;
    for (int i = 1; i < argc; ++i) {
        total_lines += getLinesFromFile(argv[i]); // *argv is a char*
    }
    printf("You have a total of %d lines in all your file(s)\n", total_lines);
    return 0;
}

int getLinesFromFile(const char* file_name) {
    int lines = 0;
    FILE* file = fopen(file_name, "r");
    if (file == NULL)  // skip unreadable files instead of crashing
        return 0;
    int c;  // int, not char, so the EOF sentinel is representable
    while ((c = getc(file)) != EOF)
        if (c == '\n')
            ++lines;
    fclose(file);
    return lines;
}
Now open the command line and type gcc lineCount.c. Then type ./a.out *.txt. This will display the total lines of files ending with .txt in your directory.
Upvotes: 2
Reputation: 321
First change to the directory whose line count you want. For example, if I want to know the number of lines in all files of a directory named sample, run cd sample. Then run wc -l *. This will return the number of lines for each file, plus the total number of lines across the directory at the end.
Upvotes: 2
Reputation: 93
You don't need all these complicated and hard-to-remember commands. You just need a Python tool called line-counter.
A quick overview
This is how you get the tool
$ pip install line-counter
Use the line command to get the file count and line count under the current directory (recursively):
$ line
Search in /Users/Morgan/Documents/Example/
file count: 4
line count: 839
If you want more detail, just use line -d.
$ line -d
Search in /Users/Morgan/Documents/Example/
Dir A/file C.c 72
Dir A/file D.py 268
file A.py 467
file B.c 32
file count: 4
line count: 839
And the best part of this tool is that you can add a .gitignore-like configuration file to it. You can set up rules to select or ignore which kinds of files to count, just as you would in .gitignore.
More description and usage is here: https://github.com/MorganZhang100/line-counter
Upvotes: 3
Reputation: 28742
I wanted to check on multiple file types and was too lazy to calculate the total by hand, so I now use this to get the total in one go:
find . -name '*.js' -or -name '*.php' | xargs wc -l | grep 'total' | awk '{ SUM += $1; print $1} END { print "Total text lines in PHP and JS",SUM }'
79351
15318
Total text lines in PHP and JS 94669
This allows you to chain multiple extension types you wish to filter on. Just add them in the -name '*.js' -or -name '*.php' part, and possibly modify the output message to your liking.
Upvotes: 3
Reputation: 47010
None of the answers so far gets at the problem of filenames with spaces. Additionally, all that use xargs can quietly mislead when the total length of the paths exceeds the system's argument-length limit: xargs then splits the list across several wc invocations, so you get multiple partial totals rather than one grand total.
Here is one that fixes these problems in a pretty direct manner. The subshell takes care of files with spaces. The awk totals the stream of individual per-file wc outputs, so it ought never to run out of space. It also restricts the exec to files only (skipping directories):
find . -type f -name '*.php' -exec bash -c 'wc -l "$0"' {} \; | awk '{s+=$1} END {print s}'
Upvotes: 11
Reputation: 133
Excluding blank lines:
find . -name "*.php" | xargs grep -v -c '^$' | awk 'BEGIN {FS=":"} { $cnt = $cnt + $2} END {print $cnt}'
Including blank lines:
find . -name "*.php" | xargs wc -l
Upvotes: 2
Reputation: 75057
I may as well add another OS X entry, this one using plain old find with exec (which I prefer over using xargs, as I have seen odd results from very large find result sets with xargs in the past). Because this is for OS X, I also added in the filtering to either .h or .m files; make sure to copy the command all the way to the end!
find ./ -type f -name "*.[mh]" -exec wc -l {} \; | sed -e 's/[ ]*//g' | cut -d"." -f1 | paste -sd+ - | bc
Upvotes: 0
Reputation: 21528
I used this inline script, which I launch from a source project's directory:
for i in $(find . -type f); do rowline=$(wc -l "$i" | cut -f1 -d" "); file=$(wc -l "$i" | cut -f2 -d" "); lines=$((lines + rowline)); echo "Lines["$lines"] " $file "has "$rowline"rows."; done && unset lines
That produces this output:
Lines[75] ./Db.h has 75rows.
Lines[143] ./Db.cpp has 68rows.
Lines[170] ./main.cpp has 27rows.
Lines[294] ./Sqlite.cpp has 124rows.
Lines[349] ./Sqlite.h has 55rows.
Lines[445] ./Table.cpp has 96rows.
Lines[480] ./DbError.cpp has 35rows.
Lines[521] ./DbError.h has 41rows.
Lines[627] ./QueryResult.cpp has 106rows.
Lines[717] ./QueryResult.h has 90rows.
Lines[828] ./Table.h has 111rows.
Upvotes: 1
Reputation: 19451
On Unix-like systems, there is a tool called cloc which provides code statistics.
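If it isn't installed yet, it is typically one package-manager command away (a sketch; the package is named cloc on Debian/Ubuntu), after which you point it at a directory:
sudo apt-get install cloc
cloc .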
I ran it on a random directory in our code base; it says:
59 text files.
56 unique files.
5 files ignored.
http://cloc.sourceforge.net v 1.53 T=0.5 s (108.0 files/s, 50180.0 lines/s)
-------------------------------------------------------------------------------
Language files blank comment code
-------------------------------------------------------------------------------
C 36 3060 1431 16359
C/C++ Header 16 689 393 3032
make 1 17 9 54
Teamcenter def 1 10 0 36
-------------------------------------------------------------------------------
SUM: 54 3776 1833 19481
-------------------------------------------------------------------------------
Upvotes: 119
Reputation: 21
I have BusyBox installed on my Windows system, so here is what I did:
ECHO OFF
for /r %%G in (*.php) do (
busybox grep . "%%G" | busybox wc -l
)
Upvotes: 1
Reputation: 46883
A straightforward one that will be fast, will use all the search/filtering power of find, won't fail when there are too many files (argument-count overflow), will work fine with files with funny symbols in their name, all without using xargs, and will not launch a uselessly high number of external commands (thanks to + for find's -exec). Here you go:
find . -name '*.php' -type f -exec cat -- {} + | wc -l
Upvotes: 12
Reputation: 1146
wc -l? Better to use grep -c ^
wc -l? Wrong!
The wc command counts newline characters, not lines! When the last line in the file does not end with a newline, it will not be counted!
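A quick two-line demonstration (printf emits no trailing newline here, so the input effectively ends mid-line):
printf 'a\nb' | wc -l       # prints 1: only one newline character present
printf 'a\nb' | grep -c ^   # prints 2: grep counts the unterminated last line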
If you still want to count lines, use grep -c ^. Full example:
# This example prints the line count for every file found
total=0
while read -r FILE; do
    # Use 'grep -c ^' instead of 'wc -l' so the last line is counted
    # even when it lacks a trailing newline
    count=$(grep -c ^ < "$FILE")
    echo "$FILE has $count lines"
    let total=total+count # Bash arithmetic; adapt for another shell
done < <(find /path -type f -name "*.php") # process substitution keeps $total in the current shell
echo TOTAL LINES COUNTED: $total
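The same grand total as a one-liner (a sketch that batches the files through find's -exec instead of a shell loop):
find /path -type f -name "*.php" -exec cat -- {} + | grep -c ^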
Finally, watch out for the wc -l trap: it counts newline characters, not lines!
Upvotes: 7