Published: May 04, 2017 • Updated: December 30, 2022 • 2 min read
This week I wrote an algorithm in Ruby to convert binary numbers into decimal numbers. Here’s the problem description, from Exercism:
“Convert a binary number, represented as a string (e.g. ‘101010’), to its decimal equivalent using first principles. Implement binary to decimal conversion. Given a binary input string, your program should produce a decimal output. The program should handle invalid inputs.”
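"First principles" here means positional notation: each binary digit contributes digit × 2^position, counting positions from the right starting at zero. Worked out by hand for '101010':

```ruby
# 101010 in binary, expanded digit by digit from the right:
# 0*2**0 + 1*2**1 + 0*2**2 + 1*2**3 + 0*2**4 + 1*2**5
0*1 + 1*2 + 0*4 + 1*8 + 0*16 + 1*32  # => 42
```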
If this were a real-world Ruby problem, I'd convert from binary to decimal with `.to_i`:

```ruby
> '101010'.to_i(2)
=> 42
```
And back to binary with `.to_s`:

```ruby
> 42.to_s(2)
=> '101010'
```
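One caveat worth noting: `String#to_i` never raises on bad input. It parses leading digits, silently ignores everything after the first invalid character, and returns 0 if nothing was parsed at all:

```ruby
'101010'.to_i(2)  # => 42
'1012'.to_i(2)    # => 5 ("101" is parsed, the trailing "2" is ignored)
'xyz'.to_i(2)     # => 0 (no leading binary digits)
```

That silent behavior is part of why the challenge asks you to handle invalid inputs explicitly.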
But this is an algorithm challenge, so here’s my algorithm.
```ruby
class Binary
  def self.to_decimal(binary)
    raise ArgumentError if binary.match?(/[^01]/)

    binary.reverse.chars.map.with_index do |digit, index|
      digit.to_i * 2**index
    end.sum
  end
end
```
And the usage:
```ruby
> Binary.to_decimal('101010')
=> 42
```
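The failure path is worth seeing too. Here's a quick sketch (the class is restated so the snippet runs standalone):

```ruby
# Same algorithm as above, repeated so this snippet is self-contained.
class Binary
  def self.to_decimal(binary)
    raise ArgumentError if binary.match?(/[^01]/)
    binary.reverse.chars.map.with_index { |digit, i| digit.to_i * 2**i }.sum
  end
end

Binary.to_decimal('101010')  # => 42

begin
  Binary.to_decimal('10201')  # contains a "2", so the guard raises
rescue ArgumentError
  puts 'not a binary string'
end
```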
What's going on here? Examining each line:

- `class Binary`, a container for our code.
- Raise `ArgumentError` if the string does not match the binary format.
- Reverse the string, then `.map` over each character with its index included.

I like that my guard clause matches invalid characters positively, raising `ArgumentError` for a range of bad inputs not specified by the Exercism test cases. I like that it reverses the string so the index can be used as the incrementing exponent. I like that it uses `.map` to return a result without an accumulator variable.
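For contrast, here's what an explicit-accumulator version might look like using `reduce` — a sketch of the alternative I avoided, with a hypothetical class name:

```ruby
# Accumulator-based alternative: same math, but threading a running
# total through reduce instead of mapping and summing.
class BinaryWithReduce
  def self.to_decimal(binary)
    raise ArgumentError if binary.match?(/[^01]/)

    binary.reverse.chars.each_with_index.reduce(0) do |sum, (digit, index)|
      sum + digit.to_i * 2**index
    end
  end
end

BinaryWithReduce.to_decimal('101010')  # => 42
```

It produces the same answer, but the `.map` + `.sum` version keeps the per-digit math and the aggregation as separate, readable steps.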
I may have golfed this down a little too much. In general, I think mathematical functions can get away with terser syntax than ordinary functions, because the principles are well-defined in another domain.
I hope this post helped you learn a few ways to convert binary to decimal and back in Ruby.
What are your thoughts on this? Let me know!