s12chung

Reputation: 1768

How to get Rails.logger printing to the console/stdout when running rspec?

Same as title: How to get Rails.logger printing to the console/stdout when running rspec? Eg.

Rails.logger.info "I WANT this to go to console/stdout when rspec is running"
puts "Like how the puts function works"

I still want Rails.logger to go to log/test.log too.

Upvotes: 101

Views: 152006

Answers (8)

JakeRobb

Reputation: 1980

Building on @Jared Beck's answer:

Create this file.

#!/usr/bin/env bash
set -euo pipefail
LOGFILE="$1"
shift

tail -fn0 "${LOGFILE}" > >(
  SHORTNAME="$(basename "${LOGFILE}")"
  BOLDBLUE=$(printf "\e[1;34m")
  PLAIN=$(printf "\e[0m")
  sed -E 's/^/'"${BOLDBLUE}<${SHORTNAME}>${PLAIN}"' /'
) &
TAIL_PID=$!
"$@"
EXIT_CODE=$?
kill ${TAIL_PID} >/dev/null 2>&1 || :
exit ${EXIT_CODE}

I saved mine as ~/bin/tailwith. ~/bin/ is on my path.

Make it executable:

chmod +x ~/bin/tailwith

Then, use as follows:

> tailwith log/test.log bundle exec rspec path/to/test_spec.rb

The first argument is your log file; the script starts tail in the background and captures its PID. The rest of the arguments are executed as if they were a separate command. When that command exits, the script kills the tail process.

As a bonus, I made it prefix each line from the log file with the log file's name in angle brackets, colored blue. Since the output is interleaved with rspec's output, this makes it so you can tell which is which.

Note that the interleaving will not be perfectly timed. Ruby has to flush the log's buffer to the log file, and then tail has to notice the change, read the additional line(s), and write them out. That takes a handful of milliseconds on a typical machine, during which time Ruby itself will continue processing and might write more to stdout directly. You should expect log output lines to appear later in the combined output than what the code would lead you to expect.

Variations

If you don't want the prefix:

#!/usr/bin/env bash
set -euo pipefail
LOGFILE="$1"
shift

tail -fn0 "${LOGFILE}" &
TAIL_PID=$!
"$@"
EXIT_CODE=$?
kill ${TAIL_PID} >/dev/null 2>&1 || :
exit ${EXIT_CODE}

If you want the prefix but not the colors:

#!/usr/bin/env bash
set -euo pipefail
LOGFILE="$1"
shift

tail -fn0 "${LOGFILE}" > >(
  SHORTNAME="$(basename "$LOGFILE")"
  sed -E "s/^/<${SHORTNAME}> /"
) &
TAIL_PID=$!
"$@"
EXIT_CODE=$?
kill ${TAIL_PID} >/dev/null 2>&1 || :
exit ${EXIT_CODE}

Explanation

For those who are interested, here's how the script works. Since this question is not really about shell scripting, I'll assume no prior knowledge. There's a lot of trickery in here! 😁

#!/usr/bin/env bash

This is called a "shebang line," which is derived from the "hash-bang" symbol pair at the beginning. Note that it's pronounced "sha-BANG," not "SHE-bang." It tells your shell how to run the script. In this case, we're telling your shell to use a tool called env to find a locally installed shell that is Bash-compatible. We could just use #!/bin/bash and it would work on most systems, but the env approach is more portable.
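If you're curious which Bash env will actually find on your machine, you can check at a prompt (purely illustrative):

command -v bash                       # the first bash on your PATH -- the one "env bash" will pick
env bash -c 'echo "$BASH_VERSION"'    # run a one-liner under that bash and print its version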

set -euo pipefail

Here, we're setting three shell options for our script:

-e tells Bash that the script should exit if any statement fails. Without this, Bash will proceed after a failure, which can lead to unexpected behavior if you're used to pretty much any other modern language.

-u tells Bash that a reference to an undefined variable should be treated as an error (which, paired with -e, means that the script will exit and give you a useful error message).

-o means "and here are some additional options to enable by name". Some options have single-character shorthands, but not all.

The pipefail option means that if a command before the end of a pipeline -- a chain of commands linked with pipes (|) -- fails, the pipeline itself should fail.

Together, these three options provide confidence that the script is working as expected. I never work without them!
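If you'd like to see what these options actually change, here's a tiny throwaway script (not part of the tool above) that demonstrates each one:

#!/usr/bin/env bash
set -euo pipefail

false || echo "a failure on the left of || is handled, so -e lets the script continue"
echo "${UNSET_VAR:-fallback}"   # the :- form is the safe way to read a possibly-unset variable under -u
cat /no/such/file | wc -l       # cat fails; with pipefail, the whole pipeline fails...
echo "never reached"            # ...so -e stops the script before this line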

LOGFILE="$1"

Capture the first argument given to the script as a variable called LOGFILE. The double quotes ensure correct processing if the file path contains spaces. (Without the quotes, Bash would interpret the parts of the path before and after the space as separate arguments, which is not what we want! This is a highly complex topic that has been the cause of ruined days for decades -- search for "bash word splitting" if you want to learn more.)
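To see why the quotes matter, try this at a prompt with a hypothetical file name that contains a space:

FILE="my test.log"     # hypothetical name with a space in it
touch "$FILE"
ls -l $FILE            # unquoted: word splitting makes ls look for "my" and "test.log" separately
ls -l "$FILE"          # quoted: ls receives the path as a single argument
rm "$FILE"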

shift

Adjust the arguments such that it's like the first one was never there. We've already parsed the first one, and doing this will make life easier later. You'll see!
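You can simulate the script's argument handling at a prompt with set --, which (re)defines the positional arguments:

set -- log/test.log bundle exec rspec spec/foo_spec.rb   # pretend these were the script's arguments
echo "$1"     # => log/test.log
shift
echo "$1"     # => bundle (everything shifted down by one)
echo "$@"     # => bundle exec rspec spec/foo_spec.rb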

tail -fn0 "${LOGFILE}" > >(

Call tail, which is a utility for writing the end of a file to standard out.

"standard out" refers to one of three standard data streams available to every process. They are "standard in" (aka stdin), "standard out" (stdout), and "standard error" (stderr). These are also known as streams 0, 1, and 2, respectively -- remember this for later! As you might guess, stdin is for input, stdout is for normal output, and stderr is for error output. The distinction between the latter two is subtle, and not all programs behave the way you want them to.

The -f option means "follow," i.e. keep watching the file and continue to output new lines as they are appended.

The -n option means "print this many lines from the end of the file." 0 means don't print anything. Paired with -f, this means to just watch for new additions to the file.

"${LOGFILE}" is a variable substitution: it takes the value of LOGFILE and supplies it to tail, telling tail which file to read. The curly braces are optional -- "$LOGFILE" would do the same thing -- but I use them out of habit because I like consistency, and sometimes they're required (for example, when the variable is immediately followed by other text, as in "${LOGFILE}_old").

Once again, the double quotes around the filename protect us against spaces in the path name.

> means "send the output to the destination on the right."

> >(...) is a Bash-specific trick that means "take the stdout of the command on the left and use it as input for a subshell." The command(s) within the subshell will receive the tail's stdout. Normally I would do this with a pipe (|), but there is a subtle difference which will become important in a moment.

A subshell is an entirely different instance of Bash, running in its own process. It inherits a snapshot of the parent shell's environment, which means $LOGFILE will remain available there.
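Here's a minimal process substitution you can try on its own. Note that the subshell still sees PREFIX, a variable set in the parent shell; at an interactive prompt the output may appear just after your prompt returns, because the subshell runs asynchronously.

PREFIX="demo"
printf 'one\ntwo\n' > >(sed "s/^/<${PREFIX}> /")
# prints:
# <demo> one
# <demo> two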

SHORTNAME="$(basename "${LOGFILE}")"

Now, we're inside the subshell. Here, we need to initialize a couple variables before we proceed.

$(...) is a command substitution; it means to run the command within and capture its stdout for use in the surrounding expression.

Take the LOGFILE variable and pass it to the basename utility, which extracts just the filename portion. So if $LOGFILE is log/test.log, basename will output just test.log.

Store basename's output in another variable called $SHORTNAME.

Note the use of double quotes both within and around the command substitution. This ensures that we can handle spaces in the filename as well as in any directory in its path.
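For example, with a hypothetical path that contains spaces:

LOGFILE="my app/log files/test.log"    # hypothetical path with spaces
SHORTNAME="$(basename "${LOGFILE}")"   # the nested quotes keep the spaces intact end to end
echo "${SHORTNAME}"                    # prints: test.log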

  BOLDBLUE=$(printf "\e[1;34m")

To use colors on a terminal, you have to output special ANSI escape sequences. These are special character sequences which tell the terminal to do something other than print a character. In addition to color, you can use these to control font weight (bold, faint), style (italic, underline, blink, strikethrough), cursor position, and more. This standard was designed to enable a surprising variety of capabilities in text-based terminals, and it requires only 7-bit ASCII to do so.

You can do the same with echo instead of printf. You need to use echo's -e switch to enable support for escaped characters, and -n to suppress a newline at the end. echo on some systems doesn't support both switches, so I always use printf.

The format of an escape sequence is as follows:

\e is the ASCII escape character, number 27 decimal (1B in hex, 033 in octal -- you can also use \033 or \x1B). This tells the terminal that an escape sequence is coming. I've read that \033 is the most portable, but have yet to hear of an issue with a script I wrote, and I find \e more readable and easier to remember.

[ is the Control Sequence Introducer; together with the escape character, it tells the terminal that a control sequence follows, as defined by the C1 control codes in the ANSI/ECMA-48 standard. Note that support for a given control sequence varies depending on your terminal. Colors are quite widely supported, but some terminals support more colors than others.

1;34m is the actual control sequence: a list of parameters separated by semicolons, followed by a final character that tells the terminal how those parameters should be applied. You can chain together as many parameters as you like. 1 means bold, and 34 means blue. The m terminates the sequence and identifies it as a Select Graphic Rendition (SGR) sequence, which is the kind we use for colors, font weights/styles, etc. The SGR table on Wikipedia's "ANSI escape code" page enumerates your other options.

For convenience, I always use command substitution to capture the sequences I need in variables for readability and reuse.

  PLAIN=$(printf "\e[0m")

Very much like the above. 0 means "reset" or "normal" -- it changes the font color, weight, and style back to the terminal's default.
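You can experiment with these at a prompt; the codes below are standard SGR parameters:

BOLDBLUE=$(printf "\e[1;34m")
GREEN=$(printf "\e[32m")
PLAIN=$(printf "\e[0m")
printf "%s\n" "${BOLDBLUE}bold blue${PLAIN} plain ${GREEN}green${PLAIN}"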

  sed -E "s/^/${BOLDBLUE}<${SHORTNAME}>${PLAIN} /"

Take the input of the subshell (which you'll recall is coming from tail -- it's the contents of the log), and pass it to sed. sed is short for Stream EDitor, and while it's capable of much more than this, here we're going to use it to make a small change to each line of input as it comes in, output the result, and proceed to the next line.

-E puts sed in extended regular-expression mode. I said I'd assume no knowledge, so for anyone who needs a primer on regular expressions (regexp), any introductory regex tutorial will get you up to speed.

The quoted portion of the line is the command we're giving to sed. Here, we're using the s (substitute) command, which takes the format s/regexp/replacement/flags. It finds matches for regexp and replaces them with replacement. We don't need any flags here, but a common one is g, which means that sed should replace every occurrence rather than just the first one. (Interestingly, you don't have to use /; that's the usual choice, but any single character will do. If your regexp or your replacement contains a forward slash, things get a little hairy -- so you just switch to | or whatever instead.)
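A few standalone examples, if you'd like to see the s command, the g flag, and an alternate delimiter in isolation:

printf 'foo bar foo\n' | sed -E 's/foo/baz/'              # => baz bar foo  (first match on each line)
printf 'foo bar foo\n' | sed -E 's/foo/baz/g'             # => baz bar baz  (g: every match)
printf '/usr/local/bin\n' | sed -E 's|/usr/local|/opt|'   # => /opt/bin     (| as delimiter: no escaped slashes)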

Our regular expression is ^ -- which simply matches the beginning of every line.

Our replacement is ${BOLDBLUE}<${SHORTNAME}>${PLAIN} -- the ANSI control sequence that sets the text color to bold blue, then the log file's short name surrounded by angle brackets, then the control sequence to change back to regular text.

Note that normally, I wrap my sed commands in single quotes. When you use double quotes in shell scripts, you're telling the shell that it should process substitutions within the quotes. That can cause issues if your command includes dollar signs or some other special characters. When you use single quotes, that tells the shell that it should treat the string literally. Since I have three substitutions within my sed command, I need to use double quotes. There are no special characters present to cause trouble, so that's okay.

If there were special characters, I could do this: 's/^/'"${BOLDBLUE}<${SHORTNAME}>${PLAIN}"' /'

Here, I've wrapped the bit with the variable substitutions in double quotes, and butted those right up against the rest of the command in single quotes. It looks weird and it's hard to read, but it has the desired effect, and in this case it produces exactly the same result.

) &

The ) marks the end of the subshell, and the & says that this entire command (tail redirecting into a subshell that calls sed) should be spun off as a background process and the script should continue without waiting for it to exit.

TAIL_PID=$!

$! is a special shell variable that contains the process ID (PID) of the last-started background process. We're capturing this in a variable so that we can kill it later!

Now, remember when I said I'd explain later why I used a redirect to a subprocess instead of a pipe?

If I had written this (simplified, no subshell):

tail -fn0 "${LOGFILE}" | sed -E "s/..."
TAIL_PID=$!

Then TAIL_PID would contain the PID of sed instead of that of tail, and in fact that's exactly what happened when I tested my first attempt. StackOverflow to the rescue!
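You can see the difference at an interactive prompt (following /dev/null just to have something harmless to tail):

tail -f /dev/null | sed 's/^/x /' &
echo "$!"      # the PID of sed, the last command in the pipeline
kill %1        # stop the whole backgrounded pipeline

tail -f /dev/null > >(sed 's/^/x /') &
echo "$!"      # the PID of tail itself
kill "$!"      # stop tail; sed exits on its own once its input closes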

"$@"

"$@" means "all of the arguments given to this script" -- and the double quotes keep each argument intact as a single word, even if it contains spaces. Except that we called shift earlier, so it's really "all of the arguments except for the first one."

Written on its own, those arguments are treated like a command -- which they are! In my example usage, this ends up being bundle exec rspec path/to/test_spec.rb, and it runs that command normally.

EXIT_CODE=$?

$? is another special shell variable. It contains the exit status of the last command. Here, we're capturing the exit status of the command represented by $@ (so, in our case, that of the tests we ran).

An exit status of zero means success; nonzero means non-success. We're capturing it here so that we can return it as our own exit status.

Note: remember earlier when I told you about how -e aborts the script if any command fails? It does that by checking exit statuses. Any command that exits with a nonzero status is considered a failure.
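You can inspect exit statuses directly at a prompt:

ls / > /dev/null; echo "$?"               # => 0 (success)
ls /no/such/dir 2> /dev/null; echo "$?"   # => nonzero (failure; the exact number varies by ls implementation)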

kill ${TAIL_PID} >/dev/null 2>&1 || :

kill is a utility that sends a signal to one or more processes. By default, it sends SIGTERM, which tells the process that it should shut down gracefully. Some processes don't handle some signals correctly or at all, but tail is well-behaved and will shut down promptly.

When tail exits, it closes its stdout stream. That closure passes into the subshell to sed, which sees the end of input and exits. sed is the last command in the subshell, so when sed exits, the subshell does as well.

>/dev/null means "redirect kill's own stdout to /dev/null," which is a non-destination. Anything sent to /dev/null is unceremoniously discarded.

2>&1 means redirect kill's stderr (stream 2, remember?) to its stdout (stream 1). Since we already redirected stdout to /dev/null, that means stderr will go there too. Note that the order here is important; we have to redirect stdout first. Combined, these two redirects mean that no matter what happens, kill will not write anything to the terminal. We care about this because the user might terminate the test run with Control+C. That will cause tail to exit, which means that the PID won't be found, leading kill to fail and print an error.
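The ordering rule is easy to demonstrate at a prompt:

ls /no/such/dir > /dev/null 2>&1   # silent: stdout points at /dev/null first, then stderr follows it there
ls /no/such/dir 2>&1 > /dev/null   # not silent: stderr was pointed at the terminal (the old stdout) first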

|| :

This is a boolean OR (||) followed by a successful (exit status 0) no-op (:). Many Ruby programmers will be familiar with the idea that you can OR a bunch of values together in order to return the first one that's truthy; we're doing the same thing here.

This part is unintuitive: in Bash, since an exit status of 0 is a success, zero is truthy, and everything else is falsy.

So, if kill fails with exit status 1, then this suffix ensures that the overall statement succeeds, and therefore that the -e option we turned on earlier doesn't consider it a failure of the script and abort.
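Here's the pattern on its own, at a prompt (without -e turned on):

false; echo "$?"        # => 1
false || :; echo "$?"   # => 0 -- the no-op "rescues" the failure, which keeps -e happy in the script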

exit ${EXIT_CODE}

Finally, we exit the script, making sure to reproduce the actual test's exit status we captured a few lines earlier.

Disclaimer: while I have a fair bit of experience writing portable scripts, I wrote this one on my Mac and have not tested it on anything else.

Upvotes: 0

the_spectator

Reputation: 1623

In addition to the existing answers: if you need logs to go to STDOUT as well as to the log file, you can use logger broadcasting.

One of my use cases was that rake seeds/tasks should print logs to STDOUT as well as to the log file.

Rails.logger.extend(
  ActiveSupport::Logger.broadcast(
    ActiveSupport::Logger.new($stdout)
  )
)

Upvotes: 4

Jared Beck

Reputation: 17538

Tail the log as a background job (&) and it will interleave with rspec output.

tail -f log/test.log &
bundle exec rspec

When you're done with the log, bring it back into the foreground with fg and exit the tail with ctrl-c as usual.

Upvotes: 40

mario199

Reputation: 406

I had a similar problem in Rails 5.0. My API/request specs did not output errors to the console, only to the log/test.log file. Other test types output errors to the console normally.

Changing config.action_dispatch.show_exceptions to false fixed the problem:

# config/environments/test.rb
config.action_dispatch.show_exceptions = false

Upvotes: 1

Thomas B Homburg

Reputation: 1961

For Rails 4.x, the log level is configured a bit differently than in Rails 3.x.

Add this to config/environments/test.rb:

# Enable stdout logger
config.logger = Logger.new(STDOUT)

# Set log level
config.log_level = :ERROR

The logger level is set on the logger instance from config.log_level at: https://github.com/rails/rails/blob/v4.2.4/railties/lib/rails/application/bootstrap.rb#L70

Environment variable

As a bonus, you can allow overwriting the log level using an environment variable with a default value like so:

# default :ERROR
config.log_level = ENV.fetch("LOG_LEVEL", "ERROR")

And then running tests from shell:

# Log level :INFO (the value is uppercased in bootstrap.rb)
$ LOG_LEVEL=info rake test

# Log level :ERROR
$ rake test

Upvotes: 103

Phil Calvin

Reputation: 5105

For Rails 4, see this answer.

For Rails 3.x, configure a logger in config/environments/test.rb:

config.logger = Logger.new(STDOUT)
config.logger.level = Logger::ERROR

This will interleave any errors that are logged during testing with the test output on STDOUT. You may wish to route the output to STDERR or use a different log level instead.

Sending these messages to both the console and a log file requires something more robust than Ruby's built-in Logger class. The logging gem will do what you want. Add it to your Gemfile, then set up two appenders in config/environments/test.rb:

logger = Logging.logger['test']
logger.add_appenders(
    Logging.appenders.stdout,
    Logging.appenders.file('example.log')
)
logger.level = :info
config.logger = logger

Upvotes: 77

Shyam Habarakada

Reputation: 15795

A solution that I like, because it keeps rspec output separate from actual rails log output, is to do the following:

  • Open a second terminal window or tab, and arrange it so that you can see both the main terminal you're running rspec on as well as the new one.
  • Run a tail command in the second window so you can see the Rails log for the test environment. By default this will be something like $ tail -f $RAILS_APP_DIR/log/test.log (or tail -f $RAILS_APP_DIR\log\test.log for Windows users)
  • Run your rspec suites

If you are running a multi-pane terminal like iTerm, this becomes even more fun and you have rspec and the test.log output side by side.

Upvotes: 21

pmwood

Reputation: 771

You can define a method in spec_helper.rb that sends a message both to Rails.logger.info and to puts and use that for debugging:

def log_test(message)
    Rails.logger.info(message)
    puts message
end

Upvotes: 8
