Pause tests in Ember
Simply await a promise that resolves after a timeout (here, 30 seconds):
test("my test", async function(assert) {
// setup…
await new Promise(resolve => setTimeout(resolve, 30000));
// …assert
});
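If you do this often, a tiny helper keeps the intent obvious (a minimal sketch; the pause name is my own, not an Ember helper):
function pause(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}
test("my test", async function(assert) {
  // setup…
  await pause(5000); // pause the test for 5 seconds
  // …assert
});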
I have often wanted to just do the following but Ecto’s Repo module doesn’t have a count method.
iex> MyApp.Repo.count(MyApp.Account)
42
It is not too difficult to create a count function that will allow you to count the results of any query.
defmodule MyApp.DBUtils do
import Ecto.Query, only: [from: 2]
@doc "Generate a select count(id) on any query"
def count(query),
do: from t in clean_query_for_count(query), select: count(t.id)
# Remove the select field from the query if it exists
defp clean_query_for_count(query),
do: Ecto.Query.exclude(query, :select)
end
This will provide a shortcut for counting any query:
MyApp.DBUtils.count(MyApp.Account) |> Repo.one!
Now, to enable Repo.count, we can modify the repo module, usually found in lib/my_app/repo.ex:
defmodule MyApp.Repo do
use Ecto.Repo, otp_app: :my_app
def count(query),
do: MyApp.DBUtils.count(query) |> __MODULE__.one!
end
That’s it. This will enable a count on any query including complicated queries and those that have a select expression set.
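As a quick sanity check, counting a filtered query works too. A sketch, assuming a hypothetical active boolean field on Account:
import Ecto.Query
query = from a in MyApp.Account, where: a.active == true
MyApp.Repo.count(query)
# returns the number of active accounts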
Appending to a list in Elixir ([1] ++ [2]) is slower than prepending and reversing ([2 | [1]] |> Enum.reverse), but how bad is it?
Start by creating a new project with mix new benchmarking and add benchfella as a dependency in your mix.exs file:
defp deps do
[{:benchfella, "~> 0.3.2"}]
end
Then run mix deps.get.
Benchfella benchmarks work similarly to tests. Create a directory named bench and then create a file ending in _bench.exs. Benchfella will find these files and run them.
Create a file bench/list_append_bench.exs. We will write our functions in the bench file, but you can also reference functions in another module to benchmark your project code.
This benchmark will test three different ways to build a list: (1) appending each element to the list using ++, (2) building the list with a recursive tail, where each element is added to the head and the tail is built up recursively, and (3) prepending each element to a list accumulator and then reversing the list at the end.
defmodule ListAppendBench do
use Benchfella
@length 1_000
# First benchmark
bench "list1 ++ list2" do
build_list_append(1, @length)
end
# Second benchmark
bench "[head | recurse ]" do
build_list_recursive_tail(1, @length)
end
# Third benchmark
bench "[head | tail] + Enum.reverse" do
build_list_prepend(1, @length)
end
@doc """
Build a list of numbers from `num` to `total` by appending each item
to the end of the list
"""
def build_list_append(num, total, acc \\ [])
def build_list_append(total, total, acc), do: acc
def build_list_append(num, total, acc) do
acc = acc ++ [num]
next_num = num + 1
build_list_append(next_num, total, acc)
end
@doc """
Build a list of numbers from `num` to `total` by building
the list with a recursive tail instead of using an accumulator
"""
def build_list_recursive_tail(total, total), do: []
def build_list_recursive_tail(num, total) do
[ num | build_list_recursive_tail(num + 1, total) ]
end
@doc """
Build a list of numbers from `num` to `total` by prepending each item
and reversing the list at the end
"""
def build_list_prepend(num, total, acc \\ [])
def build_list_prepend(total, total, acc), do: Enum.reverse(acc)
def build_list_prepend(num, total, acc) do
acc = [num | acc]
next_num = num + 1
build_list_prepend(next_num, total, acc)
end
end
Run the benchmark with mix bench and you’ll see the results:
Settings:
duration: 1.0 s
## ListAppendBench
[10:15:32] 1/3: list1 ++ list2
[10:15:34] 2/3: [head | tail] + Enum.reverse
[10:15:37] 3/3: [head | recurse ]
Finished in 6.66 seconds
## ListAppendBench
[head | tail] + Enum.reverse 100000 20.87 µs/op
[head | recurse ] 100000 21.25 µs/op
list1 ++ list2 500 3228.16 µs/op
The results: prepending to a list and reversing it is roughly 150 times faster than appending, and only fractionally faster than building the tail recursively.
For more complex benchmarks, Benchfella has various hooks for test setup and teardown. It also has the ability to compare benchmark runs with mix bench.cmp and graph the results with mix bench.graph.
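For example (assuming Benchfella’s default behaviour of saving a snapshot of each run under bench/snapshots):
$ mix bench        # run the benchmarks and save a snapshot
$ mix bench.cmp    # compare the two most recent snapshots
$ mix bench.graph  # generate an HTML report under bench/graphs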
TL;DR: all the code can be found here.
Sometimes, when you want complete control, you want to be able to install packages from source and still use an automated tool like Ansible to do it.
A simple set of tasks can check for the existence of files to eliminate the need to re-run tasks that are already complete, but that doesn’t help with making sure the correct version is installed.
I’m going to walk through creating a play that will build ruby from source. It will not do any work if ruby is already installed and is already the correct version. If the version doesn’t match, it will download, extract, configure, make and install ruby from scratch.
A first pass can be found in this gist
If repeated, this build will re-download the archive, extract it, configure it and make it. It won’t install the binary again, because it checks for the existence of the file /usr/local/bin/ruby, but other than that, all tasks will re-run.
The first step is to create a task that will determine the installed ruby version if present.
- name: Get installed ruby version
command: ruby --version # Run this command
ignore_errors: true # We don’t want an error in this command to cause the task to fail
changed_when: false
failed_when: false
register: ruby_installed_version # Register a variable with the result of the command
This task will run ruby --version but will silently fail if ruby is not installed. If ruby is installed, it registers the version string in a variable named ruby_installed_version.
The next step is to create a variable we can use to test whether to build ruby or not. This is set in our global_vars to a default of false. Then add a task that will set that variable to true if the version string doesn’t match.
- name: Force install if the version numbers do not match
set_fact:
ruby_reinstall_from_source: true
when: '(ruby_installed_version|success and (ruby_installed_version.stdout | regex_replace("^.*?([0-9\.]+).*$", "\\1") | version_compare(ruby_version, operator="!=")))'
Now we can add a when clause to all our other tasks. This will skip the task if ruby is correctly installed. That can be seen in this gist.
The when clause checks for two things: (1) the task which checked the ruby version failed (i.e. there is no ruby installed), or (2) the ruby_reinstall_from_source variable is true (i.e. the versions don’t match).
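For reference, the supporting variables might look something like this (a sketch; the file location and version value are assumptions):
# group_vars/all.yml (assumed location)
ruby_version: "2.3.1"              # the version we want installed
ruby_sha256sum: "…"                # checksum of the tarball for your version
ruby_reinstall_from_source: false  # default; flipped to true by the set_fact task above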
An example task with the when clause:
- name: Download Ruby
when: ruby_installed_version|failed or ruby_reinstall_from_source
get_url:
url: "https://cache.ruby-lang.org/pub/ruby/2.3/ruby-{{ruby_version}}.tar.gz"
dest: "/tmp/ruby-{{ruby_version}}.tar.gz"
sha256sum: "{{ruby_sha256sum}}"
# …
We now have a conditional on every task. That seems a bit redundant. This can be improved by using the block syntax: check the condition once, and then run or skip the whole installation in one move.
- when: ruby_installed_version|failed or ruby_reinstall_from_source
  block:
    - name: Download Ruby
      get_url:
        url: "https://cache.ruby-lang.org/pub/ruby/2.3/ruby-{{ruby_version}}.tar.gz"
        dest: "/tmp/ruby-{{ruby_version}}.tar.gz"
        sha256sum: "{{ruby_sha256sum}}"
    # …
The final code can be found in this gist, https://gist.github.com/andrewtimberlake/802bd8d285b3e18c5ebe, where you can walk through the three revisions as outlined in the article.
A quick tip to make it easier to use Dead Man's Snitch with the whenever gem
Whenever is a great gem for managing cron jobs. Dead Man’s Snitch is a fantastic and useful tool for making sure those cron jobs actually run when they should.
Whenever includes a number of predefined job types which can be overridden to include snitch support.
The job_type command allows you to register a job type. It takes a name and a string representing the command. Within the command string, anything that begins with : is replaced with the value from the job’s options hash. Sounds complicated, but it is in fact quite easy.
Include the whenever
gem in your Gemfile and then run
$ bundle exec wheneverize
This will create a file, config/schedule.rb. Insert these lines at the top of your config file; I have mine just below set :output.
These lines add && curl https://nosnch.in/:snitch to each job type just before :output.
job_type :command, "cd :path && :task && curl https://nosnch.in/:snitch :output"
job_type :rake, "cd :path && :environment_variable=:environment bin/rake :task --silent && curl https://nosnch.in/:snitch :output"
job_type :runner, "cd :path && bin/rails runner -e :environment ':task' && curl https://nosnch.in/:snitch :output"
job_type :script, "cd :path && :environment_variable=:environment bundle exec script/:task && curl https://nosnch.in/:snitch :output"
Now add your job to the schedule. A simple rake task would look like this:
every 1.day, roles: [:app] do
rake "log:clear"
end
Now it’s time to create the snitch. You can grab a free account at deadmanssnitch.com and add a new snitch.
Then, once that’s saved, you’ll see a screen with your snitch URL. All you need to do is copy the hex code at the end.
Use that hex code in your whenever job as follows:
every 1.day, roles: [:app] do
rake "log:clear", snitch: "06ebef375f"
end
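For illustration, with the overridden rake job type above, the generated crontab entry would look roughly like this (the path, environment and log redirection are assumptions):
0 0 * * * /bin/bash -l -c 'cd /var/www/app && RAILS_ENV=production bin/rake log:clear --silent && curl https://nosnch.in/06ebef375f >> log/cron.log 2>&1'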
Now deploy and update your wheneverized cron job. DMS will let you know as soon as your job runs for the first time, so you know it has begun to work. After that, they’ll only let you know if it fails to check in.
Tip: For best tracking, you want your DMS job to check in just before the end of the period you’re monitoring (in the above example 1 day). To do that, I revert to cron syntax in whenever and set my job up as:
# Assuming your server time zone is set to UTC
every "59 23 * * *", roles: [:app] do
rake "log:clear", snitch: "06ebef375f"
end
See “Does it matter when I ping a snitch?”. Remember to allow time for the job to run and complete. For more information, read through the full DMS FAQ.
I’ve found a number of times where I have needed to iterate over a hash and modify the values. The most recent was stripping excess spaces from the values of a Rails params hash.
The only way I know of doing this is:
hash = {one: " one ", two: "two "}
hash.each do |key, value|
  hash[key] = value.strip
end
#=> {:one=>"one", :two=>"two"}
This is a lot less elegant than using map on an Array:
[" one ", "two "].map(&:strip)
#=> ["one", "two"]
(Note the non-bang strip: strip! returns nil when the string has nothing to strip, which would wipe out already-clean values.)
I wanted something like #map for a Hash, so I came up with Hash#clean (this is a monkey patch, so exercise caution):
class Hash
  def clean
    each { |key, value|
      self[key] = yield(value)
    }
  end
end
Now it’s as easy as:
{one: " one ", two: "two "}.clean(&:strip)
#=> {:one=>"one", :two=>"two"}
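Incidentally, if you’re on Ruby 2.4 or later, the built-in Hash#transform_values does the same job without a monkey patch:
{one: " one ", two: "two "}.transform_values(&:strip)
#=> {:one=>"one", :two=>"two"}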
Now I can easily sanitise Rails parameter hashes:
def model_params
  params.require(:model).permit(:name, :email, :phone).clean(&:strip)
end
I quickly drew out the graph from the video on determining great feature fit. What you're looking for is features that will be used by all your users all of the time.
I use a large 27" iMac which I divide up windows with a browser in the top right of the screen. One thing that often frustrated me is that I could not maximise a video to fill the window completely. I had to fill my entire screen or watch it in the embedded size.
It turns out this is not too hard, change the URL in the browser from https://www.youtube.com/watch?v=oHg5SJYRHA0 to https://www.youtube.com/embed/oHg5SJYRHA0
I recently had an import job failing because it took too long. When I had a look at the file I saw that there were 74 useful lines but a total of 1,044,618 lines in the file (My guess is MS Excel having a little fun with us).
Most of the lines were simply rows of commas:
Row,Of,Headers
some,valid,data
,,
,,
,,
,,
,,
The CSV library has an option named skip_blanks, but the documentation says “Note that this setting will not skip rows that contain column separators, even if the rows contain no actual data”, so that’s not actually helpful in this case.
What is needed is skip_lines with a regular expression that will match any line containing just column separators (/^(?:,\s*)+$/).
The resulting code looks like this:
require 'csv'
CSV.foreach('/tmp/tmp.csv',
headers: true,
skip_blanks: true,
skip_lines: /^(?:,\s*)+$/) do |row|
puts row.inspect
end
#<CSV::Row "Row":"some" "Of":"valid" "Headers":"data">
#=> nil
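As a quick sanity check, the same options work with CSV.read if you want to confirm how many rows survive the filters:
require 'csv'
rows = CSV.read('/tmp/tmp.csv',
                headers: true,
                skip_blanks: true,
                skip_lines: /^(?:,\s*)+$/)
puts rows.size # the number of data rows left after skipping the junk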
I won’t cover all the boilerplate code, but you can view that at JSFiddle.
The project is a ListItem model and a corresponding ListCollection. There is a ListItemView which is compiled into a ListView to create an ordered list, and a FormView used for adding items to the collection.
The first component of our code is the comparator in the collection which keeps the list sorted by name.
var ListCollection = Backbone.Collection.extend({
model: ListItem,
comparator: function(item) {
return item.get('name').toLowerCase();
}
});
With this, a simple render method will always have the list in order, but it needs to redraw the whole list every time the collection is updated. Simply bind the add event to this.render and you’re done.
//...
initialize: function() {
this.listenTo(this.collection, 'add', this.render);
},
render: function() {
var items = [];
this.collection.each(function(item) {
items.push((new ListItemView({model: item})).render().el);
});
this.$el.html(items);
return this;
}
//...
What if we have a list that is more complicated, or we want to display the item being added? For this we need a couple of things: keep the render method, add an addItem method which will append the item to the list, and bind the add event to this.addItem.
//...
initialize: function() {
this.listenTo(this.collection, 'add', this.addItem);
},
render: function() {
var self = this;
var items = [];
this.collection.each(function(item) {
items.push(self.buildItemView(item).render().el);
});
this.$el.html(items);
return this;
},
addItem: function(item) {
var $view = this.buildItemView(item).render().$el;
this.$el.append($view.hide().fadeIn());
},
buildItemView: function(item) {
return new ListItemView({model: item});
}
//...
The problem now is that we’re using jQuery’s append, which adds the item view to the end of the list, negating the work of the comparator in our Backbone collection. What we need now is a way to insert the new item into the list at the correct index. For that we’ll need to add an insertAt method to jQuery.
This new method will take an index and an element, and place the element among the container’s children at that index.
$.fn.extend({
insertAt: function(index, element) {
var lastIndex = this.children().size();
if(index < lastIndex) {
this.children().eq(index).before(element);
} else {
this.append(element);
}
return this;
}
});
Now we can update our addItem method to calculate the index of the new item and then insert it into the list at that index.
//...
addItem: function(item) {
// Get the index of the newly added item
var index = this.collection.indexOf(item);
// Build a view for the item
var $view = this.buildItemView(item).render().$el;
// Insert the view at the same index in the list
this.$el.insertAt(index, $view.hide().fadeIn());
}
//...
The final working product can be seen in the JSFiddle linked above.
If you want to view the SQL query used to construct the information returned from a psql command (which will help you learn the underlying system catalogs), type \set ECHO_HIDDEN:
$ psql test
psql (9.4.1)
Type "help" for help.
test=# \set ECHO_HIDDEN
test=# \dt
********* QUERY **********
SELECT n.nspname as "Schema",
c.relname as "Name",
CASE c.relkind WHEN 'r' THEN 'table' WHEN 'v' THEN 'view' WHEN 'm' THEN 'materialized view' WHEN 'i' THEN 'index' WHEN 'S' THEN 'sequence' WHEN 's' THEN 'special' WHEN 'f' THEN 'foreign table' END as "Type",
pg_catalog.pg_get_userbyid(c.relowner) as "Owner"
FROM pg_catalog.pg_class c
LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('r','')
AND n.nspname <> 'pg_catalog'
AND n.nspname <> 'information_schema'
AND n.nspname !~ '^pg_toast'
AND pg_catalog.pg_table_is_visible(c.oid)
ORDER BY 1,2;
**************************
List of relations
Schema | Name | Type | Owner
--------+------+-------+--------
public | temp | table | andrew
(1 row)
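As an aside, ECHO_HIDDEN also accepts the value noexec, which prints the query without executing it, and you can switch the whole thing off again with \unset:
test=# \set ECHO_HIDDEN noexec
test=# \unset ECHO_HIDDEN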
I recently had a requirement where I needed an account to have zero, one or two actions associated with it: at most one single (non-repeating) action, and at most one of the many repeating types. I didn’t want two single actions, and I didn’t want two or more repeating actions of any type. To solve this I used two partial indexes to split the data set and apply a unique constraint to each set.
CREATE TABLE accounts (
id integer NOT NULL,
name text NOT NULL
);
CREATE TABLE actions (
id integer NOT NULL,
account_id integer NOT NULL,
repeat_type text NOT NULL DEFAULT 'none'
);
INSERT INTO accounts (id, name) VALUES (1, 'Test 1'), (2, 'Test 2');
If I create a unique index on actions(account_id) then I will only be able to have a single action per account.
CREATE UNIQUE INDEX idx_unique_accounts ON actions(account_id);
INSERT INTO actions (id, account_id, repeat_type) VALUES (1, 1, 'none');
-- INSERT 0 1
INSERT INTO actions (id, account_id, repeat_type) VALUES (1, 1, 'weekly');
-- ERROR: duplicate key value violates unique constraint "idx_unique_accounts"
-- DETAIL: Key (account_id)=(1) already exists.
DROP INDEX idx_unique_accounts;
The solution is to create two partial indexes, one for the single action and one for the repeating action.
TRUNCATE TABLE actions;
CREATE UNIQUE INDEX idx_unique_single_actions ON actions(account_id) WHERE (repeat_type = 'none');
CREATE UNIQUE INDEX idx_unique_repeating_actions ON actions(account_id) WHERE (repeat_type != 'none');
INSERT INTO actions (id, account_id, repeat_type) VALUES (1, 1, 'none');
-- INSERT 0 1
INSERT INTO actions (id, account_id, repeat_type) VALUES (1, 1, 'weekly');
-- INSERT 0 1
Now inserting another single action will result in an error.
INSERT INTO actions (id, account_id, repeat_type) VALUES (1, 1, 'none');
-- ERROR: duplicate key value violates unique constraint "idx_unique_single_actions"
-- DETAIL: Key (account_id)=(1) already exists.
Or inserting another repeating action, even of a different repeat type, will result in an error.
INSERT INTO actions (id, account_id, repeat_type) VALUES (1, 1, 'monthly');
-- ERROR: duplicate key value violates unique constraint "idx_unique_repeating_actions"
-- DETAIL: Key (account_id)=(1) already exists.
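The constraints are per account, so a second account can still hold its own single and repeating actions:
INSERT INTO actions (id, account_id, repeat_type) VALUES (2, 2, 'none');
-- INSERT 0 1
INSERT INTO actions (id, account_id, repeat_type) VALUES (3, 2, 'monthly');
-- INSERT 0 1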
Fibers are code blocks that can be paused and resumed. They are unlike threads because they never run concurrently. The programmer is in complete control of when a fiber is run. Because of this we can create two fibers and pass control between them.
Control is passed to a fiber when you call Fiber#resume; the fiber returns control by calling Fiber.yield:
fiber = Fiber.new do
Fiber.yield 'one'
Fiber.yield 'two'
end
puts fiber.resume
#=> one
puts fiber.resume
#=> two
The above example shows the most common use case where Fiber.yield is passed an argument which is returned through Fiber#resume. What’s interesting is that you can pass an argument into the fiber via Fiber#resume as well. The first call to Fiber#resume starts the fiber and that argument goes to the block that creates the fiber, all subsequent calls to Fiber#resume have their arguments passed to Fiber.yield.
fiber = Fiber.new do |arg|
puts arg # prints 'one'
puts Fiber.yield('two') # prints 'three'
puts Fiber.yield('four') # prints 'five'
end
puts fiber.resume('one') # prints 'two'
#=> one
#=> two
puts fiber.resume('three') # prints 'four'
#=> three
#=> four
puts fiber.resume('five') # prints nil because there's no corresponding yield and the fiber exits
#=> nil
Armed with this information, we can set up two fibers and get them to communicate with each other.
require 'fiber'
fiber2 = nil
fiber1 = Fiber.new do
puts fiber2.resume # start fiber2 and print first result (1)
puts fiber2.resume 2 # send second number and print second result (3)
fiber2.resume 4 # send fourth number, print nothing and exit
end
fiber2 = Fiber.new do
puts Fiber.yield 1 # send first number and print returned result (2)
puts Fiber.yield 3 # send third number, print returned result (4) and exit
end
fiber1.resume # start fiber1
#=> 1
#=> 2
#=> 3
#=> 4
puts "fiber1 done" unless fiber1.alive?
#=> fiber1 done
puts "fiber2 done" unless fiber2.alive?
#=> fiber2 done
Knowing we can send information between two fibers with alternating calls of Fiber#resume and Fiber.yield, we have the building blocks to tackle a streaming #each_group method. Tip: the fiber you first call #resume on should always call #resume on the fiber it is communicating with; the other fiber then always calls Fiber.yield. This goes against the natural inclination to pass information with Fiber.yield, as in the first example above. Because of how the two fibers are set up below, you’ll see that no information is passed with Fiber.yield; information is only passed using Fiber#resume. Confusing, I know.
# -*- coding: utf-8 -*-
require 'fiber'
module EachGroup
def each_group(*fields, &block)
grouper = Grouper.new(*fields, &block)
loop_fiber = Fiber.new do
each do |result|
grouper.process_result(result)
end
end
loop_fiber.resume
end
class Grouper
def initialize(*fields, &block)
@current_group = nil
@fields = fields
@block = block
end
attr_reader :fields, :block
attr_accessor :current_group
def process_result(result)
group_fiber = get_group_fiber(result)
group_fiber.resume(result) if group_fiber.alive?
end
private
def get_group_fiber(result)
group_value = fields.map{|f| result.public_send(f) }
unless current_group == group_value
self.current_group = group_value
create_group_fiber(result, group_value)
end
@group_fiber
end
def create_group_fiber(result, group_value)
@group_fiber = Fiber.new do |first_result|
group = Group.new(group_value)
block.call(group)
end
@group_fiber.resume(nil) # Start the fiber and wait for its first yield
end
end
class Group
def initialize(value)
@value = value
end
attr_reader :value
def each(&block)
while result = Fiber.yield
block.call(result)
end
end
end
end
Note that #each_group requires its input to be sorted by the grouping fields.
require 'each_group'
require 'ostruct'
Array.send(:include, EachGroup)
array = [
OpenStruct.new(year: 2014, month: 1, date: 1),
OpenStruct.new(year: 2014, month: 1, date: 3),
OpenStruct.new(year: 2014, month: 2, date: 5),
OpenStruct.new(year: 2014, month: 2, date: 7),
]
array.each_group(:year, :month) do |group|
puts group.value.inspect
group.each do |obj|
puts " #{obj.date}"
end
end
#=> [2014, 1]
#=> 1
#=> 3
#=> [2014, 2]
#=> 5
#=> 7
This code can be used with ActiveRecord as follows:
ActiveRecord::Relation.send(:include, EachGroup)
Model.order('year, month').each_group(:year, :month) do |group|
group.each do
# ...
end
end
I have uploaded a Gist that shows a previous iteration of the EachGroup module using a nested loop, which you may find easier to follow when working out how the fibers control the flow of the loop.
Thanks for taking the time to read through this. Explaining complicated concepts like Fibers is a challenge, so please leave a comment and let me know if this was helpful or if you still have any questions.
I’m working on an app that creates user accounts and (optionally) subscribes users to our mailing list. Because I’m handling user creation in my app, I need some way to add them to the mailing list which is hosted on MailChimp. To do this, I am using their API to send through subscriber information.
The documentation for the ruby gem is not great, so below is some sample code that will get you started.
> gem install mailchimp-api
# or
> echo 'gem "mailchimp-api", require: false' >> Gemfile
> bundle install
In MailChimp, go to your account settings page, click Extras and API Keys. If you don’t have an API key yet, click Create A Key.
Every list has a unique ID, which is needed to add subscribers to the correct list. Go to Lists, click on your list name, then click Settings and List name & defaults. On the right you’ll see your List ID (a 10 character hex code).
require 'mailchimp' # The gem name is mailchimp-api but you require mailchimp
module MailChimpSubscription
# These should probably be environment variables or configuration variables
MAIL_CHIMP_API_KEY = "0000000001234567890_us1"
MAIL_CHIMP_LIST_ID = "abcdef1234"
extend self
def subscribe(user)
mail_chimp.lists.subscribe(MAIL_CHIMP_LIST_ID,
# The email field is a struct that can use an
# email address or two MailChimp specific list ids (see API docs)
{email: user.email},
# Set your merge vars here
{'FNAME' => user.first_name, 'LNAME' => user.last_name})
rescue Mailchimp::ListAlreadySubscribedError
# Decide what to do if the user is already subscribed
rescue Mailchimp::ListDoesNotExistError => e
# This is definitely a problem I want to know about
raise e
rescue Mailchimp::Error => e
# Unforeseen errors that need to be dealt with
end
private
def mail_chimp
@mail_chimp ||= Mailchimp::API.new(MAIL_CHIMP_API_KEY)
end
end
To use this module, you pass in a user object that responds to #email, #first_name and #last_name:
user = OpenStruct.new(email: '[email protected]', first_name: 'John', last_name: 'Doe')
MailChimpSubscription.subscribe(user)
It’s probably a good idea to put mailing list subscription into a background job so that you don’t slow down your user creation response time. You can also handle transient errors, retry failed attempts etc.
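A minimal sketch of such a job, assuming Sidekiq (the class name is my own invention):
class MailChimpSubscriptionJob
  include Sidekiq::Worker
  sidekiq_options retry: 5 # retries cover transient API failures
  def perform(user_id)
    user = User.find(user_id)
    MailChimpSubscription.subscribe(user)
  end
end
# Enqueue it after creating the user:
MailChimpSubscriptionJob.perform_async(user.id)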
To add syntax highlighting to a Middleman blog, use the middleman-blog and middleman-syntax gems with redcarpet as the markdown renderer. For GitHub-style source code coloring, grab the GitHub Pygments stylesheet:
wget https://github.com/richleland/pygments-css/raw/master/github.css
A code block like the following will then be rendered with GitHub’s colors:
def some_code
end
I got a message from a client this morning telling me that all users could see all reports on our product. Not good. I use CanCan to manage permissions and until now it has served me well. What went wrong? Whether a bug or not, I discovered that a very recent change I made had opened up the hole.
I wanted to have a permission setting that could prevent anyone from seeing any reports as well as more fine grained control over each individual report. My permissions looked a bit like this:
class Ability
def initialize(user)
can :read, Reports
can :read, Reports::ReportA
end
end
When checking permissions for another report within the module, I didn’t expect this:
module Reports
class ReportBController
def show
authorize! :read, Reports::ReportB #=> I assumed it would not be authorized but it is
...
end
end
end
What I didn’t expect is that when you authorise a module, all classes in that namespace are authorised as well. As I mentioned above, I don’t know if this is by design or not. Some quick googling didn’t help me so I changed my code for a quick solution.
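The quick fix amounts to granting each report class explicitly rather than the namespace; something like this (a sketch, not necessarily my original change):
class Ability
  def initialize(user)
    can :read, Reports::ReportA
    # Reports::ReportB is deliberately not granted, so authorize! will now raise
  end
end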
I post this to warn others who may have made the same assumption. If you’re reading this and know the project better and can point out if it is a bug or feature, please let me know in the comments.
This tutorial specifically covers Logos 5, but things should also work in Logos 4, though the menus and tools may be in different places.
To get started you need to open the highlighting tool. Click on Tools and then Highlighting.
Each palette contains a few highlighters of similar types. To use one, select the text you want to mark in your book and click the highlighter.
By default your highlighting is stored in a notes document named after the palette you used. So in this example my highlight is stored in a notes document named Highlighter Pens. I like to save my notes and highlights in specific note documents. This can be done by clicking on the little icon that appears to the right of the highlight palette name as you hover your mouse over the name (or right-click on the name) and selecting “Save in…”
The option I tend to use is Save in: Most recent note file. When I begin work I will ensure that I have one notes document open in Logos for the specific task I’m working on. That becomes the most recent note file and all my highlights and notes go in there. Be careful that you don’t end up with two notes documents open or your highlights will go to the one you last accessed. Remember that you have to change the Save in setting for each palette.
To remove highlights, select one or more highlighted passages on the screen, right-click, and click Remove annotations; all selected highlights will be removed.
I like to have my Logos Bibles look like they’re underlined in pencil just like my real Bible. To do this, I’ve created my own highlighters. It’s super easy to do so I’m going to show you how.
Create your new style. This is a great place to play and personalise how you mark up your books. Don’t be scared to create various styles or duplicate and modify existing styles from other palettes. You can also move styles between palettes.
Don’t forget to change your Save In: setting for your new palette.
If you have to click the specific highlighter every time you want to highlight something, it becomes a little tedious and you have to always have the highlighters panel open and visible (which means you can’t use the screen for other important documents). To solve this, you can set keyboard shortcuts to your highlighters. Let’s add a keyboard shortcut to our new highlighter style.
Click the little arrow icon next to the highlighter (it appears as you mouse over the highlighter), or right-click the highlighter. Mouse over the Shortcut Key: menu. Select the letter you want to assign to your highlighter; in this case I chose U for underline.