Elixir has some useful utility functions available in IEx, like h/1, which prints documentation for the given module or function/arity pair.
You can add your own utility functions or macros by defining a utility module and then importing it into your .iex.exs file.
Example
defmodule MyApp.IexUtilities do
def u(id_or_username) do
MyApp.Users.find_user(id_or_username)
end
end
Import your utility module in your .iex.exs file in the project root:
# .iex.exs
import MyApp.IexUtilities
and the function is available in your iex session
iex> user = u "demo"
%MyApp.User{id: 42, username: "demo", name: "John Doe"}
Macros
You can take this a bit further and automatically assign the result to a variable within the iex session by using a macro and an unhygienic variable. A variable defined with var!/1 will bleed out into the outer scope, meaning you can type u "username" and have the result automatically bound to a variable, in this case user:
defmodule MyApp.IexUtilities do
  defmacro u(id_or_username) do
    quote do
      var!(user) = MyApp.Users.find_user(unquote(id_or_username))
    end
  end
end
and now in your iex session you can easily look up a user to work with:
iex> u "demo"
%MyApp.User{id: 42, username: "demo", name: "John Doe"}
iex> user
%MyApp.User{id: 42, username: "demo", name: "John Doe"}
I had a GenServer whose state I wanted to change during a hot upgrade release, so I dutifully reached for code_change/3 as per the documentation, but no matter how hard I tried, I couldn't get it to work.
I read and re-read all the documentation I could find on releases and hot upgrades and tried and tried again, but my callback was never called.
I quite like Dave Thomas's method of splitting the API from the server implementation, so my code looked something like this:
defmodule MyStore do
  def child_spec(opts) do
    %{
      id: MyStore.Server,
      start: {MyStore, :start_link, [opts]},
      type: :worker,
      restart: :permanent,
      shutdown: 500
    }
  end

  def start_link(args \\ nil, opts \\ []) do
    GenServer.start_link(MyStore.Server, args, opts)
  end

  def put(pid, key, value) do
    GenServer.call(pid, {:put, key, value})
  end

  def get(pid, key) do
    GenServer.call(pid, {:get, key})
  end

  defmodule Server do
    use GenServer
    require Logger

    @impl true
    def init(_opts) do
      {:ok, []}
    end

    @impl true
    def handle_call({:put, key, value}, _from, server_state) do
      server_state = [{key, value} | server_state]
      {:reply, :ok, server_state}
    end

    def handle_call({:get, key}, _from, server_state) do
      {:reply, Keyword.get(server_state, key), server_state}
    end

    @vsn "1"
    @impl true
    def code_change(from_vsn, server_state, _extra) do
      Logger.info("code_change from: #{inspect(from_vsn)}")
      {:ok, server_state}
    end
  end
end
A very simple and contrived example of a store running on a GenServer, with the obvious flaw that it's implemented as a keyword list instead of the more obvious map. So the idea is to change the state via a hot upgrade.
Adding the following code_change/3 code before the original implementation should do the trick, along with updating the server API to use the map.
defmodule Server do
  use GenServer
  require Logger

  @impl true
  def init(_opts) do
    {:ok, %{}}
  end

  @impl true
  def handle_call({:put, key, value}, _from, server_state) do
    server_state = Map.put(server_state, key, value)
    {:reply, :ok, server_state}
  end

  def handle_call({:get, key}, _from, server_state) do
    {:reply, Map.get(server_state, key), server_state}
  end

  @vsn "2"
  @impl true
  # Ignoring downgrading for this example
  def code_change("1", server_state, _extra) do
    Logger.info("code_change from: #{inspect(server_state)}")
    {:ok, Map.new(server_state)}
  end

  def code_change(from_vsn, server_state, _extra) do
    Logger.info("code_change from: #{inspect(from_vsn)}")
    {:ok, server_state}
  end
end
All good. So, have you found out what's wrong yet? Neither had I.
As far as I can tell, there is nothing wrong with my code. The problem isn't even visible here; it becomes apparent when you look at the supervisor and how Erlang finds the processes it's going to run code_change/3 against.
During an application upgrade, the release handler works through the supervision tree and pauses processes that need updating. It then runs the code_change/3 function on the module for each process, and then unpauses the processes and finalises the release.
The appup file for the example above would look something like this:
{"2",
[{"1", [{update, 'Elixir.MyStore.Server', {advanced, []}}]}],
[{"1", [{update, 'Elixir.MyStore.Server', {advanced, []}}]}]
}.
That looks fine. We want the upgrade to run MyStore.Server.code_change/3.
When the store is started under a dynamic supervisor, the response from which_children/1 is
[{:undefined, #PID<0.161.0>, :worker, [MyStore]}]
This is the same result that Erlang gets when it retrieves all supervised processes in get_supervised_procs/0, which is "…the magic function. It finds all process in the system and which modules they execute as a call_back or process module."
{:undefined, #PID<0.161.0>, :worker, [MyStore]} is included in the results of :release_handler_1.get_supervised_procs() (which I was super happy to find was an exported function, thank you Erlang) and there we have the problem: Erlang thinks that MyStore is the module being executed as the call_back or process module, not MyStore.Server.
Because MyStore is not listed as changing in the appup file, no code_change/3 is called on it, and because MyStore.Server isn't listed as a module of a running process, code_change/3 isn't called on that module either. So the process is left with its state unchanged, the next call to the process will see the incorrect state, and the process will crash.
After a lot of code spelunking I identified the problem, and the solution is quite a simple change: move start_link/2 into MyStore.Server and update the child_spec accordingly.
defmodule MyStore do
  def child_spec(opts) do
    %{
      id: MyStore.Server,
      start: {MyStore.Server, :start_link, [opts]},
      type: :worker,
      restart: :permanent,
      shutdown: 500
    }
  end

  # ...

  defmodule Server do
    use GenServer
    require Logger

    def start_link(args \\ nil, opts \\ []) do
      GenServer.start_link(Server, args, opts)
    end

    # ...
  end
end
Now the output of :release_handler_1.get_supervised_procs()
looks like this:
[#...
{:undefined, #PID<0.161.0>, :worker, [MyStore.Server]}]
and code_change/3 is correctly called.
I always appreciate gaining a deeper understanding of how the underlying toolset of a system works, and I hope that when you are searching for "why code_change isn't called on my GenServer" you'll get this helpful result ;-)
Simply await a promise which resolves after a timeout.
test("my test", async function(assert) {
// setupâŠ
await new Promise(resolve => setTimeout(resolve, 30000));
// âŠassert
});
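If this pattern comes up often, the delay can be wrapped in a tiny helper. This is just a sketch; the name sleep is an illustrative choice, not part of any framework API:

```javascript
// Resolve after the given number of milliseconds.
function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

// Usage inside an async test body:
//   await sleep(30000);
```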
I have often wanted to just do the following, but Ecto's Repo module doesn't have a count function.
iex> MyApp.Repo.count(MyApp.Account)
42
It is not too difficult to create a count function that will allow you to count the results of any query.
defmodule MyApp.DBUtils do
  import Ecto.Query, only: [from: 2]

  @doc "Generate a select count(id) on any query"
  def count(query),
    do: from(t in clean_query_for_count(query), select: count(t.id))

  # Remove the select field from the query if it exists
  defp clean_query_for_count(query),
    do: Ecto.Query.exclude(query, :select)
end
This will provide a shortcut for counting any query:
MyApp.DBUtils.count(MyApp.Account) |> Repo.one!
Now, to enable Repo.count, we can modify the repo module, usually found in lib/my_app/repo.ex:
defmodule MyApp.Repo do
  use Ecto.Repo, otp_app: :my_app

  def count(query),
    do: MyApp.DBUtils.count(query) |> __MODULE__.one!
end
That's it. This will enable a count on any query, including complicated queries and those that have a select expression set.
Appending to a list in Elixir ([1] ++ [2]) is slower than prepending and reversing ([2 | [1]] |> Enum.reverse), but how bad is it?
Start by creating a new project with mix new benchmarking, add benchfella as a dependency in your mix.exs file
defp deps do
  [{:benchfella, "~> 0.3.2"}]
end
and run mix deps.get.
Benchfella benchmarks work similarly to tests. Create a directory named bench and then create a file ending in _bench.exs. Benchfella will find these files and run them.
Create a file bench/list_append_bench.exs.
We will write our functions in the bench file, but you can reference functions in another module to benchmark your project code.
This benchmark will test three different ways to build a list: (1) appending each element to the list using ++; (2) building the list with a recursive tail, where the element is added to the head and the tail is built up recursively; and (3) prepending each element to an accumulator and then reversing the list at the end.
defmodule ListAppendBench do
  use Benchfella

  @length 1_000

  # First benchmark
  bench "list1 ++ list2" do
    build_list_append(1, @length)
  end

  # Second benchmark
  bench "[head | recurse ]" do
    build_list_recursive_tail(1, @length)
  end

  # Third benchmark
  bench "[head | tail] + Enum.reverse" do
    build_list_prepend(1, @length)
  end

  @doc """
  Build a list of numbers from `num` to `total` by appending each item
  to the end of the list
  """
  def build_list_append(num, total, acc \\ [])
  def build_list_append(total, total, acc), do: acc
  def build_list_append(num, total, acc) do
    acc = acc ++ [num]
    next_num = num + 1
    build_list_append(next_num, total, acc)
  end

  @doc """
  Build a list of numbers from `num` to `total` by building
  the list with a recursive tail instead of using an accumulator
  """
  def build_list_recursive_tail(total, total), do: []
  def build_list_recursive_tail(num, total) do
    [num | build_list_recursive_tail(num + 1, total)]
  end

  @doc """
  Build a list of numbers from `num` to `total` by prepending each item
  and reversing the list at the end
  """
  def build_list_prepend(num, total, acc \\ [])
  def build_list_prepend(total, total, acc), do: Enum.reverse(acc)
  def build_list_prepend(num, total, acc) do
    acc = [num | acc]
    next_num = num + 1
    build_list_prepend(next_num, total, acc)
  end
end
Run the benchmark with mix bench and you'll see the results:
Settings:
  duration:      1.0 s

## ListAppendBench
[10:15:32] 1/3: list1 ++ list2
[10:15:34] 2/3: [head | tail] + Enum.reverse
[10:15:37] 3/3: [head | recurse ]

Finished in 6.66 seconds

## ListAppendBench
[head | tail] + Enum.reverse    100000   20.87 µs/op
[head | recurse ]               100000   21.25 µs/op
list1 ++ list2                     500   3228.16 µs/op
The results: prepending to a list and reversing it is roughly 150 times faster than appending, and only fractionally faster than building the tail recursively.
For more complex benchmarks, Benchfella has various hooks for test setup and teardown. It can also compare benchmark runs with mix bench.cmp and graph the results with mix bench.graph.
A quick tip to make it easier to use Dead Man's Snitch with the whenever gem.
Whenever is a great gem for managing cron jobs. Dead Man's Snitch is a fantastic and useful tool for making sure those cron jobs actually run when they should.
Whenever includes a number of predefined job types which can be overridden to include snitch support. The job_type command allows you to register a job type. It takes a name and a string representing the command. Within the command string, anything that begins with : is replaced with the value from the job's options hash. Sounds complicated, but it is in fact quite easy.
Include the whenever gem in your Gemfile and then run
$ bundle exec wheneverize
This will create a file, config/schedule.rb. Insert these lines at the top of your config file; I have mine just below set :output.
These lines add && curl https://nosnch.in/:snitch to each job type just before :output.
job_type :command, "cd :path && :task && curl https://nosnch.in/:snitch :output"
job_type :rake, "cd :path && :environment_variable=:environment bin/rake :task --silent && curl https://nosnch.in/:snitch :output"
job_type :runner, "cd :path && bin/rails runner -e :environment ':task' && curl https://nosnch.in/:snitch :output"
job_type :script, "cd :path && :environment_variable=:environment bundle exec script/:task && curl https://nosnch.in/:snitch :output"
Now add your job to the schedule. A simple rake task would look like this:
every 1.day, roles: [:app] do
rake "log:clear"
end
Now it's time to create the snitch. You can grab a free account at deadmanssnitch.com and add a new snitch.
Then, once that's saved, you'll see a screen with your snitch URL. All you need to do is copy the hex code at the end.
Use that hex code in your whenever job as follows:
every 1.day, roles: [:app] do
rake "log:clear", snitch: "06ebef375f"
end
Now deploy and update your wheneverized cron job. DMS will let you know as soon as your job runs for the first time, so you know it has begun to work. After that, they'll only let you know if it fails to check in.
Tip: For best tracking, you want your DMS job to check in just before the end of the period you're monitoring (in the above example, 1 day). To do that, I revert to cron syntax in whenever and set my job up as:
# Assuming your server time zone is set to UTC
every "59 23 * * *", roles: [:app] do
rake "log:clear", snitch: "06ebef375f"
end
See "Does it matter when I ping a snitch?" and remember to allow time for the job to run and complete.
For more information, read through the full DMS FAQ.
I've found a number of times where I have needed to iterate over a hash and modify the values. The most recent was stripping excess spaces from the values of a Rails params hash.
The only way I know of doing this is:
hash = {one: " one ", two: "two "}
hash.each do |key, value|
  hash[key] = value.strip
end
#=> {:one=>"one", :two=>"two"}
This is a lot less elegant than using map on an Array:
[" one ", "two "].map(&:strip)
#=> ["one", "two"]
I wanted something like #map for a Hash, so I came up with Hash#clean (this is a monkey patch, so exercise caution):
class Hash
  def clean(&block)
    each { |key, value|
      self[key] = yield(value)
    }
  end
end
Now it's as easy as:
{one: " one ", two: "two "}.clean(&:strip)
#=> {:one=>"one", :two=>"two"}
Now I can easily sanitise Rails parameter hashes:
def model_params
  params.require(:model).permit(:name, :email, :phone).clean(&:strip)
end
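As an aside, since Ruby 2.4 the standard library covers this use case: Hash#transform_values returns a new hash with each value passed through the block (and transform_values! mutates in place), so no monkey patch is needed. A minimal sketch:

```ruby
hash = { one: " one ", two: "two " }

# transform_values maps over the values only, keeping the keys.
cleaned = hash.transform_values(&:strip)
# → {:one=>"one", :two=>"two"}
```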
I recently had an import job failing because it took too long. When I had a look at the file I saw that there were 74 useful lines but a total of 1,044,618 lines in the file (my guess is MS Excel having a little fun with us).
Most of the lines were simply rows of commas:
Row,Of,Headers
some,valid,data
,,
,,
,,
,,
,,
The CSV library has an option named skip_blanks, but the documentation says "Note that this setting will not skip rows that contain column separators, even if the rows contain no actual data", so that's not actually helpful in this case.
What is needed is skip_lines with a regular expression that will match any lines with just column separators (/^(?:,\s*)+$/).
The resulting code looks like this:
require 'csv'

CSV.foreach('/tmp/tmp.csv',
            headers: true,
            skip_blanks: true,
            skip_lines: /^(?:,\s*)+$/) do |row|
  puts row.inspect
end
#<CSV::Row "Row":"some" "Of":"valid" "Headers":"data">
#=> nil
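Here is a self-contained sketch of the same idea, parsing from a string instead of a file so it can be run anywhere:

```ruby
require 'csv'

data = <<~CSV
  Row,Of,Headers
  some,valid,data
  ,,
  ,,
CSV

# skip_lines drops any line matching the regex before parsing,
# so the comma-only rows never reach us.
rows = CSV.parse(data, headers: true, skip_lines: /^(?:,\s*)+$/)
rows.each { |row| puts row.inspect }
# Only the single data row survives.
```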
I won't cover all the boilerplate code, but you can view that at JSFiddle.
The project is a ListItem model and a corresponding ListCollection. There is a ListItemView which is composed into a ListView to create an ordered list. There is a FormView used for adding items to the collection.
The first component of our code is the comparator in the collection which keeps the list sorted by name.
var ListCollection = Backbone.Collection.extend({
  model: ListItem,
  comparator: function(item) {
    return item.get('name').toLowerCase();
  }
});
With this, a simple render method will always have the list in order, but it needs to redraw the list every time the collection is updated. Simply bind the add event to this.render and you're done.
//...
initialize: function() {
  this.listenTo(this.collection, 'add', this.render);
},

render: function() {
  var items = [];
  this.collection.each(function(item) {
    items.push((new ListItemView({model: item})).render().el);
  });
  this.$el.html(items);
  return this;
}
//...
What if we have a list that is more complicated, or we want to display the item being added? For this we need a couple of things:
- Split the creation of the item view out into its own factory method
- Call the factory method when building the initial list within render
- Create a new addItem method which will append the item to the list
- Change our event binding to this.addItem
//...
initialize: function() {
  this.listenTo(this.collection, 'add', this.addItem);
},

render: function() {
  var self = this;
  var items = [];
  this.collection.each(function(item) {
    items.push(self.buildItemView(item).render().el);
  });
  this.$el.html(items);
  return this;
},

addItem: function(item) {
  var $view = this.buildItemView(item).render().$el;
  this.$el.append($view.hide().fadeIn());
},

buildItemView: function(item) {
  return new ListItemView({model: item});
}
//...
The problem now is that we're using jQuery's append, which adds the item view to the end of the list, negating the work of the comparator in our Backbone collection. What we need now is a way to insert the new item into the list at the correct index. For that we'll need to add an insertAt method to jQuery.
This new method takes an index and an element, and it places the element into the children collection at the correct index.
$.fn.extend({
  insertAt: function(index, element) {
    var lastIndex = this.children().size();
    if (index < lastIndex) {
      this.children().eq(index).before(element);
    } else {
      this.append(element);
    }
    return this;
  }
});
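The core index logic is independent of jQuery. Here is a framework-free sketch of the same idea on a plain array (the function name and shape are illustrative, not part of the original code):

```javascript
// Insert element at the given index, appending when the index is
// at or beyond the current end, mirroring the jQuery insertAt above.
function insertAt(list, index, element) {
  if (index < list.length) {
    list.splice(index, 0, element);
  } else {
    list.push(element);
  }
  return list;
}

insertAt(['a', 'c'], 1, 'b'); // → ['a', 'b', 'c']
```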
Now we can update our addItem
method to calculate the index of the new item and then add it into the list at that index.
//...
addItem: function(item) {
  // Get the index of the newly added item
  var index = this.collection.indexOf(item);
  // Build a view for the item
  var $view = this.buildItemView(item).render().$el;
  // Insert the view at the same index in the list
  this.$el.insertAt(index, $view.hide().fadeIn());
}
//...
The final working product is embedded here:
An overview of how Fibers work in Ruby
Fibers are code blocks that can be paused and resumed. They are unlike threads because they never run concurrently. The programmer is in complete control of when a fiber is run. Because of this we can create two fibers and pass control between them.
Control is passed to a fiber when you call Fiber#resume, the Fiber returns control by calling
Fiber.yield
fiber = Fiber.new do
  Fiber.yield 'one'
  Fiber.yield 'two'
end

puts fiber.resume
#=> one
puts fiber.resume
#=> two
The above example shows the most common use case, where Fiber.yield is passed an argument which is returned through Fiber#resume.
What's interesting is that you can pass an argument into the fiber via Fiber#resume as well. The first call to Fiber#resume starts the fiber, and its argument goes to the block that creates the fiber; all subsequent calls to Fiber#resume have their arguments passed to Fiber.yield.
fiber = Fiber.new do |arg|
  puts arg                 # prints 'one'
  puts Fiber.yield('two')  # prints 'three'
  puts Fiber.yield('four') # prints 'five'
end

puts fiber.resume('one')   # prints 'two'
#=> one
#=> two
puts fiber.resume('three') # prints 'four'
#=> three
#=> four
puts fiber.resume('five')  # prints 'five'; there's no corresponding yield, so the fiber exits and resume returns nil
#=> five
Armed with this information, we can set up two fibers and get them to communicate with each other.
require 'fiber'

fiber2 = nil
fiber1 = Fiber.new do
  puts fiber2.resume    # start fiber2 and print first result (1)
  puts fiber2.resume 2  # send second number and print second result (3)
  fiber2.resume 4       # send fourth number, print nothing and exit
end

fiber2 = Fiber.new do
  puts Fiber.yield 1    # send first number and print returned result (2)
  puts Fiber.yield 3    # send third number, print returned result (4) and exit
end

fiber1.resume # start fiber1
#=> 1
#=> 2
#=> 3
#=> 4

puts "fiber1 done" unless fiber1.alive?
#=> fiber1 done
puts "fiber2 done" unless fiber2.alive?
#=> fiber2 done
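The same pause-and-resume mechanics make a Fiber handy as a simple generator. A minimal sketch (the Fibonacci choice is purely illustrative):

```ruby
# A fiber that lazily yields Fibonacci numbers, pausing after each one.
fib = Fiber.new do
  a, b = 0, 1
  loop do
    Fiber.yield a   # pause here until the next resume
    a, b = b, a + b
  end
end

first_five = 5.times.map { fib.resume }
# → [0, 1, 1, 2, 3]
```

Each resume runs the fiber just far enough to produce one more value, so the infinite loop never runs away.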
EachGroup module
Knowing we can send information between two fibers with alternating calls of Fiber#resume and Fiber.yield, we have the building blocks to tackle a streaming #each_group method.
Tip: The fiber you first call #resume on should always call #resume on the fiber it is communicating with. The other fiber then always calls Fiber.yield. This goes against the natural inclination to pass information with Fiber.yield as in the first example above. Because of how the two fibers are set up below, you'll see that no information is passed with Fiber.yield; information is only passed using Fiber#resume. Confusing, I know.
# -*- coding: utf-8 -*-
require 'fiber'

module EachGroup
  def each_group(*fields, &block)
    grouper = Grouper.new(*fields, &block)
    loop_fiber = Fiber.new do
      each do |result|
        grouper.process_result(result)
      end
    end
    loop_fiber.resume
  end

  class Grouper
    def initialize(*fields, &block)
      @current_group = nil
      @fields = fields
      @block = block
    end

    attr_reader :fields, :block
    attr_accessor :current_group

    def process_result(result)
      group_fiber = get_group_fiber(result)
      group_fiber.resume(result) if group_fiber.alive?
    end

    private

    def get_group_fiber(result)
      group_value = fields.map { |f| result.public_send(f) }
      unless current_group == group_value
        self.current_group = group_value
        create_group_fiber(result, group_value)
      end
      @group_fiber
    end

    def create_group_fiber(result, group_value)
      @group_fiber = Fiber.new do |first_result|
        group = Group.new(group_value)
        block.call(group)
      end
      @group_fiber.resume(nil) # Start the fiber and wait for its first yield
    end
  end

  class Group
    def initialize(value)
      @value = value
    end

    attr_reader :value

    def each(&block)
      while result = Fiber.yield
        block.call(result)
      end
    end
  end
end
Example Usage
#each_group requires input sorted for grouping.
require 'each_group'
require 'ostruct'

Array.send(:include, EachGroup)

array = [
  OpenStruct.new(year: 2014, month: 1, date: 1),
  OpenStruct.new(year: 2014, month: 1, date: 3),
  OpenStruct.new(year: 2014, month: 2, date: 5),
  OpenStruct.new(year: 2014, month: 2, date: 7),
]

array.each_group(:year, :month) do |group|
  puts group.value.inspect
  group.each do |obj|
    puts "  #{obj.date}"
  end
end
#=> [2014, 1]
#=> 1
#=> 3
#=> [2014, 2]
#=> 5
#=> 7
This code can be used with ActiveRecord as follows:
ActiveRecord::Relation.send(:include, EachGroup)

Model.order('year, month').each_group(:year, :month) do |group|
  group.each do
    # ...
  end
end
I have uploaded a Gist that shows a previous iteration of the EachGroup module using a nested loop, which you may find easier to use to understand how the fibers are used to control the flow of the loop.
- The above code with an RSpec spec - https://gist.github.com/andrewtimberlake/9462561
- The original code with nested loops - https://gist.github.com/andrewtimberlake/9462561/f0e88cd310614a34693d57c3fc759f5c78e3a264
Thanks for taking the time to read through this. Explaining complicated concepts like Fibers is a challenge; please leave a comment and let me know if this was helpful or if you still have any questions.