Bug#924381: unblock: nageru/1.8.4-1



Package: release.debian.org
Severity: normal
User: release.debian.org@packages.debian.org
Usertags: unblock

Hi,

I see that my upload of nageru 1.8.4-1 is going to be too late for
the freeze, so I'm requesting an exception.

The package contains upstream changes only (I am upstream); most of
them are fixes for bugs we discovered after using it intensively
over a weekend. The bulk of the changes are in Futatabi (the instant
replay server included in the package); Nageru itself would largely
be fine in 1.8.2, and Futatabi would also _work_ and be capable of
doing useful things, but it would be less stable than desired.

I've attached a debdiff. There are some minor new features included;
if you prefer, I can upload something like 1.8.4+really-1.8.2 to
sid with only the bugfixes, although it would probably be lower risk
to just allow the upstream version. The new features are:

 - Futatabi: Allow skipping to next clip while playing.
 - Futatabi: Allow queueing and playing clips without a cue-out.
 - Futatabi: Allow cue-in and cue-out padding to be set separately.
 - Futatabi: Allow hiding cameras in the UI.
 - Futatabi: Add some more Prometheus metrics.
 - Nageru: Use ALSA hardware timestamps for more stable delay
   (can arguably be taken as a bugfix; see the sketch after this list).
 - Nageru: Make a few more actions controllable by MIDI.
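
For context, here is a minimal sketch of the kind of ALSA status query
the timestamp change relies on (generic alsa-lib calls, not the actual
Nageru code; it assumes timestamping has already been enabled through
the sw_params API):

  #include <alsa/asoundlib.h>

  /* Illustrative only: read the hardware timestamp and the current
   * delay for a PCM handle. */
  static int get_hw_timestamp(snd_pcm_t *pcm, snd_htimestamp_t *ts,
                              snd_pcm_sframes_t *delay)
  {
      snd_pcm_status_t *status;
      snd_pcm_status_alloca(&status);

      int err = snd_pcm_status(pcm, status);
      if (err < 0)
          return err;

      snd_pcm_status_get_htstamp(status, ts);    /* when the status was taken */
      *delay = snd_pcm_status_get_delay(status); /* frames still to be played */
      return 0;
  }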

The bugfixes are:

 - Futatabi: Fix crashes due to locked SQLite databases (critical; the
   gist of the fix is sketched after this list).
 - Futatabi: Fix playing clips longer than 10 minutes.
 - Futatabi: Fix laggy video displays by multithreading video decoding.
 - Futatabi: Fix high CPU usage when exporting files.
 - Futatabi: Fix compilation with external CEF, and compilation with newer
   CEF (not relevant for buster, since it does not ship CEF).
 - Futatabi: Fix a crash on startup (read-past-the-end of Prometheus
   metrics data).
 - Nageru and Futatabi: Many 32-bit fixes, in particular using PRId64
   instead of %ld in printf (all GCC warnings fixed); a minimal example
   follows the list.
 - Nageru: Fix reconnecting to Futatabi after a Futatabi crash
   (i.e., you don't have to restart streaming if Futatabi goes down).
 - Nageru: Fix lag issues when playing back data from Futatabi.
 - Nageru: Fix audio transcoding in the headless transcoder; it was
   all broken (signed/unsigned confusion).
 - Nageru: Fix a crash on startup when video inputs were in use and
   the machine was heavily loaded.
 - Nageru: Fix performance issues when sending 1080p data to Futatabi.
 - Nageru: Fix a deadlock on startup when certain MIDI controllers
   were in use.
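
The gist of the SQLite fix, roughly condensed from the db.cpp changes
in the debdiff (error handling trimmed; in the real code the busy
timeout and the unique index are set up when the database is opened):

  #include <sqlite3.h>
  #include <string>

  void store_state(sqlite3 *db, const std::string &serialized)
  {
      // Wait up to an hour for write locks instead of failing with
      // SQLITE_BUSY/SQLITE_LOCKED.
      sqlite3_busy_timeout(db, 3600000);

      // A unique index over a constant expression limits the table to a
      // single row, so REPLACE INTO can overwrite it atomically instead
      // of the old DELETE + INSERT pair.
      sqlite3_exec(db, "CREATE UNIQUE INDEX only_one_state ON state (1);",
                   nullptr, nullptr, nullptr);  // Errors ignored (it may exist).

      sqlite3_stmt *stmt;
      sqlite3_prepare_v2(db, "REPLACE INTO state VALUES (?)", -1, &stmt, 0);
      sqlite3_bind_blob(stmt, 1, serialized.data(), serialized.size(), SQLITE_STATIC);
      sqlite3_step(stmt);
      sqlite3_finalize(stmt);
  }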
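
And a minimal, self-contained example of the 32-bit printf pattern
mentioned above (illustrative, not code from the package):

  #include <inttypes.h>
  #include <stdio.h>

  int main(void)
  {
      int64_t pts = 1234567890123LL;
      /* %ld breaks on 32-bit architectures, where int64_t is "long long"
       * rather than "long", and GCC warns about the format mismatch:
       *     printf("pts=%09ld\n", pts);
       * The portable spelling uses the PRId64 macro: */
      printf("pts=%09" PRId64 "\n", pts);
      return 0;
  }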

So that's a fairly long list; hopefully it makes the debdiff easier
to follow. :-) If you'd like a focused patch set for selected
bugfixes, I can provide that.

unblock nageru/1.8.4-1

-- System Information:
Debian Release: buster/sid
  APT prefers testing-debug
  APT policy: (500, 'testing-debug'), (500, 'testing'), (500, 'stable')
Architecture: amd64 (x86_64)
Foreign Architectures: i386

Kernel: Linux 4.18.11 (SMP w/40 CPU cores)
Locale: LANG=en_DK.UTF-8, LC_CTYPE=en_DK.UTF-8 (charmap=UTF-8), LANGUAGE=en_NO:en_US:en_GB:en (charmap=UTF-8)
Shell: /bin/sh linked to /bin/dash
Init: systemd (via /run/systemd/system)
diff -Nru nageru-1.8.2/debian/changelog nageru-1.8.4/debian/changelog
--- nageru-1.8.2/debian/changelog	2019-01-19 22:58:59.000000000 +0100
+++ nageru-1.8.4/debian/changelog	2019-03-11 23:41:01.000000000 +0100
@@ -1,3 +1,17 @@
+nageru (1.8.4-1) unstable; urgency=high
+
+  * New upstream release.
+    * Fixes FTBFS on 32-bit platforms.
+
+ -- Steinar H. Gunderson <sesse@debian.org>  Mon, 11 Mar 2019 23:41:01 +0100
+
+nageru (1.8.3-1) unstable; urgency=high
+
+  * New upstream release.
+    * urgency=high due to the high amount of important upstream bug fixes.
+
+ -- Steinar H. Gunderson <sesse@debian.org>  Sun, 10 Mar 2019 20:24:25 +0100
+
 nageru (1.8.2-1) unstable; urgency=medium
 
   * New upstream release.
diff -Nru nageru-1.8.2/futatabi/behringer_cmd_pl1.midimapping nageru-1.8.4/futatabi/behringer_cmd_pl1.midimapping
--- nageru-1.8.2/futatabi/behringer_cmd_pl1.midimapping	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/futatabi/behringer_cmd_pl1.midimapping	2019-03-11 23:40:21.000000000 +0100
@@ -23,6 +23,10 @@
 play_ready: { note_number: 35  velocity: 2 }
 playing: { note_number: 35 }
 
+# Next is mapped to fast-forward.
+next: { note_number: 37 }
+next_ready: { note_number: 37 }
+
 # Queue is marked to Cue; close enough.
 queue: { note_number: 34 }
 queue_enabled: { note_number: 34 }
diff -Nru nageru-1.8.2/futatabi/db.cpp nageru-1.8.4/futatabi/db.cpp
--- nageru-1.8.2/futatabi/db.cpp	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/futatabi/db.cpp	2019-03-11 23:40:21.000000000 +0100
@@ -15,16 +15,28 @@
 		exit(1);
 	}
 
+	// Set an effectively infinite timeout for waiting for write locks;
+	// if we get SQLITE_LOCKED, we just exit out, so this is much better.
+	ret = sqlite3_busy_timeout(db, 3600000);
+	if (ret != SQLITE_OK) {
+		fprintf(stderr, "sqlite3_busy_timeout: %s\n", sqlite3_errmsg(db));
+		exit(1);
+	}
+
 	sqlite3_exec(db, R"(
 		CREATE TABLE IF NOT EXISTS state (state BLOB);
 	)",
 	             nullptr, nullptr, nullptr);  // Ignore errors.
 
+	sqlite3_exec(db, "CREATE UNIQUE INDEX only_one_state ON state (1);", nullptr, nullptr, nullptr);  // Ignore errors.
+
 	sqlite3_exec(db, R"(
 		CREATE TABLE IF NOT EXISTS settings (settings BLOB);
 	)",
 	             nullptr, nullptr, nullptr);  // Ignore errors.
 
+	sqlite3_exec(db, "CREATE UNIQUE INDEX only_one_settings ON settings (1);", nullptr, nullptr, nullptr);  // Ignore errors.
+
 	sqlite3_exec(db, R"(
 		DROP TABLE file;
 	)",
@@ -92,16 +104,10 @@
 		exit(1);
 	}
 
-	ret = sqlite3_exec(db, "DELETE FROM state", nullptr, nullptr, nullptr);
-	if (ret != SQLITE_OK) {
-		fprintf(stderr, "DELETE: %s\n", sqlite3_errmsg(db));
-		exit(1);
-	}
-
 	sqlite3_stmt *stmt;
-	ret = sqlite3_prepare_v2(db, "INSERT INTO state VALUES (?)", -1, &stmt, 0);
+	ret = sqlite3_prepare_v2(db, "REPLACE INTO state VALUES (?)", -1, &stmt, 0);
 	if (ret != SQLITE_OK) {
-		fprintf(stderr, "INSERT prepare: %s\n", sqlite3_errmsg(db));
+		fprintf(stderr, "REPLACE prepare: %s\n", sqlite3_errmsg(db));
 		exit(1);
 	}
 
@@ -109,13 +115,13 @@
 
 	ret = sqlite3_step(stmt);
 	if (ret == SQLITE_ROW) {
-		fprintf(stderr, "INSERT step: %s\n", sqlite3_errmsg(db));
+		fprintf(stderr, "REPLACE step: %s\n", sqlite3_errmsg(db));
 		exit(1);
 	}
 
 	ret = sqlite3_finalize(stmt);
 	if (ret != SQLITE_OK) {
-		fprintf(stderr, "INSERT finalize: %s\n", sqlite3_errmsg(db));
+		fprintf(stderr, "REPLACE finalize: %s\n", sqlite3_errmsg(db));
 		exit(1);
 	}
 
@@ -169,16 +175,10 @@
 		exit(1);
 	}
 
-	ret = sqlite3_exec(db, "DELETE FROM settings", nullptr, nullptr, nullptr);
-	if (ret != SQLITE_OK) {
-		fprintf(stderr, "DELETE: %s\n", sqlite3_errmsg(db));
-		exit(1);
-	}
-
 	sqlite3_stmt *stmt;
-	ret = sqlite3_prepare_v2(db, "INSERT INTO settings VALUES (?)", -1, &stmt, 0);
+	ret = sqlite3_prepare_v2(db, "REPLACE INTO settings VALUES (?)", -1, &stmt, 0);
 	if (ret != SQLITE_OK) {
-		fprintf(stderr, "INSERT prepare: %s\n", sqlite3_errmsg(db));
+		fprintf(stderr, "REPLACE prepare: %s\n", sqlite3_errmsg(db));
 		exit(1);
 	}
 
@@ -186,13 +186,13 @@
 
 	ret = sqlite3_step(stmt);
 	if (ret == SQLITE_ROW) {
-		fprintf(stderr, "INSERT step: %s\n", sqlite3_errmsg(db));
+		fprintf(stderr, "REPLACE step: %s\n", sqlite3_errmsg(db));
 		exit(1);
 	}
 
 	ret = sqlite3_finalize(stmt);
 	if (ret != SQLITE_OK) {
-		fprintf(stderr, "INSERT finalize: %s\n", sqlite3_errmsg(db));
+		fprintf(stderr, "REPLACE finalize: %s\n", sqlite3_errmsg(db));
 		exit(1);
 	}
 
@@ -262,26 +262,6 @@
 	// Delete any existing instances with this filename.
 	sqlite3_stmt *stmt;
 
-	ret = sqlite3_prepare_v2(db, "DELETE FROM filev2 WHERE filename=?", -1, &stmt, 0);
-	if (ret != SQLITE_OK) {
-		fprintf(stderr, "DELETE prepare: %s\n", sqlite3_errmsg(db));
-		exit(1);
-	}
-
-	sqlite3_bind_text(stmt, 1, filename.data(), filename.size(), SQLITE_STATIC);
-
-	ret = sqlite3_step(stmt);
-	if (ret == SQLITE_ROW) {
-		fprintf(stderr, "DELETE step: %s\n", sqlite3_errmsg(db));
-		exit(1);
-	}
-
-	ret = sqlite3_finalize(stmt);
-	if (ret != SQLITE_OK) {
-		fprintf(stderr, "DELETE finalize: %s\n", sqlite3_errmsg(db));
-		exit(1);
-	}
-
 	// Create the protobuf blob for the new row.
 	FileContentsProto file_contents;
 	unordered_set<unsigned> seen_stream_idx;  // Usually only one.
@@ -307,7 +287,7 @@
 	file_contents.SerializeToString(&serialized);
 
 	// Insert the new row.
-	ret = sqlite3_prepare_v2(db, "INSERT INTO filev2 (filename, size, frames) VALUES (?, ?, ?)", -1, &stmt, 0);
+	ret = sqlite3_prepare_v2(db, "REPLACE INTO filev2 (filename, size, frames) VALUES (?, ?, ?)", -1, &stmt, 0);
 	if (ret != SQLITE_OK) {
 		fprintf(stderr, "INSERT prepare: %s\n", sqlite3_errmsg(db));
 		exit(1);
@@ -319,13 +299,13 @@
 
 	ret = sqlite3_step(stmt);
 	if (ret == SQLITE_ROW) {
-		fprintf(stderr, "INSERT step: %s\n", sqlite3_errmsg(db));
+		fprintf(stderr, "REPLACE step: %s\n", sqlite3_errmsg(db));
 		exit(1);
 	}
 
 	ret = sqlite3_finalize(stmt);
 	if (ret != SQLITE_OK) {
-		fprintf(stderr, "INSERT finalize: %s\n", sqlite3_errmsg(db));
+		fprintf(stderr, "REPLACE finalize: %s\n", sqlite3_errmsg(db));
 		exit(1);
 	}
 
diff -Nru nageru-1.8.2/futatabi/defs.h nageru-1.8.4/futatabi/defs.h
--- nageru-1.8.2/futatabi/defs.h	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/futatabi/defs.h	2019-03-11 23:40:21.000000000 +0100
@@ -4,6 +4,7 @@
 #define MAX_STREAMS 16
 #define CACHE_SIZE_MB 2048
 #define MUX_BUFFER_SIZE 10485760
+#define FRAMES_PER_FILE 1000
 
 #define DEFAULT_HTTPD_PORT 9096
 
diff -Nru nageru-1.8.2/futatabi/export.cpp nageru-1.8.4/futatabi/export.cpp
--- nageru-1.8.2/futatabi/export.cpp	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/futatabi/export.cpp	2019-03-11 23:40:21.000000000 +0100
@@ -227,7 +227,7 @@
 	for (const Clip &clip : clips) {
 		clips_with_id.emplace_back(ClipWithID{ clip, 0 });
 	}
-	double total_length = compute_total_time(clips_with_id);
+	TimeRemaining total_length = compute_total_time(clips_with_id);
 
 	promise<void> done_promise;
 	future<void> done = done_promise.get_future();
@@ -237,8 +237,8 @@
 	player.set_done_callback([&done_promise] {
 		done_promise.set_value();
 	});
-	player.set_progress_callback([&current_value, total_length](const std::map<uint64_t, double> &player_progress, double time_remaining) {
-		current_value = 1.0 - time_remaining / total_length;
+	player.set_progress_callback([&current_value, total_length](const std::map<uint64_t, double> &player_progress, TimeRemaining time_remaining) {
+		current_value = 1.0 - time_remaining.t / total_length.t;  // Nothing to do about the infinite clips.
 	});
 	player.play(clips_with_id);
 	while (done.wait_for(std::chrono::milliseconds(100)) != future_status::ready && !progress.wasCanceled()) {
diff -Nru nageru-1.8.2/futatabi/flags.cpp nageru-1.8.4/futatabi/flags.cpp
--- nageru-1.8.2/futatabi/flags.cpp	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/futatabi/flags.cpp	2019-03-11 23:40:21.000000000 +0100
@@ -17,8 +17,9 @@
 	OPTION_SLOW_DOWN_INPUT = 1001,
 	OPTION_HTTP_PORT = 1002,
 	OPTION_TALLY_URL = 1003,
-	OPTION_CUE_POINT_PADDING = 1004,
-	OPTION_MIDI_MAPPING = 1005
+	OPTION_CUE_IN_POINT_PADDING = 1004,
+	OPTION_CUE_OUT_POINT_PADDING = 1005,
+	OPTION_MIDI_MAPPING = 1006
 };
 
 void usage()
@@ -37,7 +38,8 @@
 	fprintf(stderr, "                                  2 = default (realtime 720p on fast embedded GPUs)\n");
 	fprintf(stderr, "                                  3 = good (realtime 720p on GTX 970 or so)\n");
 	fprintf(stderr, "                                  4 = best (not realtime on any current GPU)\n");
-	fprintf(stderr, "      --cue-point-padding SECS    move cue-in/cue-out N seconds earlier/later on set\n");
+	fprintf(stderr, "      --cue-in-point-padding SECS   move cue-in N seconds earlier on set\n");
+	fprintf(stderr, "      --cue-out-point-padding SECS  move cue-out N seconds later on set\n");
 	fprintf(stderr, "  -d, --working-directory DIR     where to store frames and database\n");
 	fprintf(stderr, "      --http-port PORT            which port to listen on for output\n");
 	fprintf(stderr, "      --tally-url URL             URL to get tally color from (polled every 100 ms)\n");
@@ -56,7 +58,8 @@
 		{ "working-directory", required_argument, 0, 'd' },
 		{ "http-port", required_argument, 0, OPTION_HTTP_PORT },
 		{ "tally-url", required_argument, 0, OPTION_TALLY_URL },
-		{ "cue-point-padding", required_argument, 0, OPTION_CUE_POINT_PADDING },
+		{ "cue-in-point-padding", required_argument, 0, OPTION_CUE_IN_POINT_PADDING },
+		{ "cue-out-point-padding", required_argument, 0, OPTION_CUE_OUT_POINT_PADDING },
 		{ "midi-mapping", required_argument, 0, OPTION_MIDI_MAPPING },
 		{ 0, 0, 0, 0 }
 	};
@@ -102,9 +105,13 @@
 		case OPTION_TALLY_URL:
 			global_flags.tally_url = optarg;
 			break;
-		case OPTION_CUE_POINT_PADDING:
-			global_flags.cue_point_padding_seconds = atof(optarg);
-			global_flags.cue_point_padding_set = true;
+		case OPTION_CUE_IN_POINT_PADDING:
+			global_flags.cue_in_point_padding_seconds = atof(optarg);
+			global_flags.cue_in_point_padding_set = true;
+			break;
+		case OPTION_CUE_OUT_POINT_PADDING:
+			global_flags.cue_out_point_padding_seconds = atof(optarg);
+			global_flags.cue_out_point_padding_set = true;
 			break;
 		case OPTION_MIDI_MAPPING:
 			global_flags.midi_mapping_filename = optarg;
@@ -125,7 +132,8 @@
 		usage();
 		exit(1);
 	}
-	if (global_flags.cue_point_padding_seconds < 0.0) {
+	if (global_flags.cue_in_point_padding_seconds < 0.0 ||
+	    global_flags.cue_out_point_padding_seconds < 0.0) {
 		fprintf(stderr, "Cue point padding cannot be negative.\n");
 		usage();
 		exit(1);
diff -Nru nageru-1.8.2/futatabi/flags.h nageru-1.8.4/futatabi/flags.h
--- nageru-1.8.2/futatabi/flags.h	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/futatabi/flags.h	2019-03-11 23:40:21.000000000 +0100
@@ -15,8 +15,10 @@
 	uint16_t http_port = DEFAULT_HTTPD_PORT;
 	double output_framerate = 60000.0 / 1001.0;
 	std::string tally_url;
-	double cue_point_padding_seconds = 0.0;  // Can be changed in the menus.
-	bool cue_point_padding_set = false;
+	double cue_in_point_padding_seconds = 0.0;  // Can be changed in the menus.
+	bool cue_in_point_padding_set = false;
+	double cue_out_point_padding_seconds = 0.0;  // Can be changed in the menus.
+	bool cue_out_point_padding_set = false;
 	std::string midi_mapping_filename;  // Empty for none.
 };
 extern Flags global_flags;
diff -Nru nageru-1.8.2/futatabi/futatabi_midi_mapping.proto nageru-1.8.4/futatabi/futatabi_midi_mapping.proto
--- nageru-1.8.2/futatabi/futatabi_midi_mapping.proto	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/futatabi/futatabi_midi_mapping.proto	2019-03-11 23:40:21.000000000 +0100
@@ -50,6 +50,10 @@
 	optional MIDILightProto playing = 26;
 	optional MIDILightProto play_ready = 40;
 
+	optional MIDIButtonProto next = 45;
+	optional int32 next_button_bank = 46;
+	optional MIDILightProto next_ready = 47;
+
 	optional MIDIButtonProto toggle_lock = 36;
 	optional int32 toggle_lock_bank = 37;
 	optional MIDILightProto locked = 38;
diff -Nru nageru-1.8.2/futatabi/jpeg_frame_view.cpp nageru-1.8.4/futatabi/jpeg_frame_view.cpp
--- nageru-1.8.2/futatabi/jpeg_frame_view.cpp	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/futatabi/jpeg_frame_view.cpp	2019-03-11 23:40:21.000000000 +0100
@@ -59,19 +59,6 @@
 	size_t last_used;
 };
 
-struct PendingDecode {
-	JPEGFrameView *destination;
-
-	// For actual decodes (only if frame below is nullptr).
-	FrameOnDisk primary, secondary;
-	float fade_alpha;  // Irrelevant if secondary.stream_idx == -1.
-
-	// Already-decoded frames are also sent through PendingDecode,
-	// so that they get drawn in the right order. If frame is nullptr,
-	// it's a real decode.
-	shared_ptr<Frame> frame;
-};
-
 // There can be multiple JPEGFrameView instances, so make all the metrics static.
 once_flag jpeg_metrics_inited;
 atomic<int64_t> metric_jpeg_cache_used_bytes{ 0 };  // Same value as cache_bytes_used.
@@ -86,12 +73,9 @@
 
 }  // namespace
 
-thread JPEGFrameView::jpeg_decoder_thread;
 mutex cache_mu;
 map<FrameOnDisk, LRUFrame, FrameOnDiskLexicalOrder> cache;  // Under cache_mu.
 size_t cache_bytes_used = 0;  // Under cache_mu.
-condition_variable any_pending_decodes;
-deque<PendingDecode> pending_decodes;  // Under cache_mu.
 atomic<size_t> event_counter{ 0 };
 extern QGLWidget *global_share_widget;
 extern atomic<bool> should_quit;
@@ -277,7 +261,7 @@
 		CacheMissBehavior cache_miss_behavior = DECODE_IF_NOT_IN_CACHE;
 		{
 			unique_lock<mutex> lock(cache_mu);  // TODO: Perhaps under another lock?
-			any_pending_decodes.wait(lock, [] {
+			any_pending_decodes.wait(lock, [this] {
 				return !pending_decodes.empty() || should_quit.load();
 			});
 			if (should_quit.load())
@@ -285,20 +269,14 @@
 			decode = pending_decodes.front();
 			pending_decodes.pop_front();
 
-			size_t num_pending = 0;
-			for (const PendingDecode &other_decode : pending_decodes) {
-				if (other_decode.destination == decode.destination) {
-					++num_pending;
-				}
-			}
-			if (num_pending > 3) {
+			if (pending_decodes.size() > 3) {
 				cache_miss_behavior = RETURN_NULLPTR_IF_NOT_IN_CACHE;
 			}
 		}
 
 		if (decode.frame != nullptr) {
 			// Already decoded, so just show it.
-			decode.destination->setDecodedFrame(decode.frame, nullptr, 1.0f);
+			setDecodedFrame(decode.frame, nullptr, 1.0f);
 			continue;
 		}
 
@@ -312,7 +290,7 @@
 			}
 
 			bool found_in_cache;
-			shared_ptr<Frame> frame = decode_jpeg_with_cache(frame_spec, cache_miss_behavior, &decode.destination->frame_reader, &found_in_cache);
+			shared_ptr<Frame> frame = decode_jpeg_with_cache(frame_spec, cache_miss_behavior, &frame_reader, &found_in_cache);
 
 			if (frame == nullptr) {
 				assert(cache_miss_behavior == RETURN_NULLPTR_IF_NOT_IN_CACHE);
@@ -339,11 +317,11 @@
 		}
 
 		// TODO: Could we get jitter between non-interpolated and interpolated frames here?
-		decode.destination->setDecodedFrame(primary_frame, secondary_frame, decode.fade_alpha);
+		setDecodedFrame(primary_frame, secondary_frame, decode.fade_alpha);
 	}
 }
 
-void JPEGFrameView::shutdown()
+JPEGFrameView::~JPEGFrameView()
 {
 	any_pending_decodes.notify_all();
 	jpeg_decoder_thread.join();
@@ -374,7 +352,6 @@
 	decode.primary = frame;
 	decode.secondary = secondary_frame;
 	decode.fade_alpha = fade_alpha;
-	decode.destination = this;
 	pending_decodes.push_back(decode);
 	any_pending_decodes.notify_all();
 }
@@ -384,24 +361,18 @@
 	lock_guard<mutex> lock(cache_mu);
 	PendingDecode decode;
 	decode.frame = std::move(frame);
-	decode.destination = this;
 	pending_decodes.push_back(decode);
 	any_pending_decodes.notify_all();
 }
 
-ResourcePool *resource_pool = nullptr;
-
 void JPEGFrameView::initializeGL()
 {
 	glDisable(GL_BLEND);
 	glDisable(GL_DEPTH_TEST);
 	check_error();
 
-	static once_flag once;
-	call_once(once, [] {
-		resource_pool = new ResourcePool;
-		jpeg_decoder_thread = std::thread(jpeg_decoder_thread_func);
-	});
+	resource_pool = new ResourcePool;
+	jpeg_decoder_thread = std::thread(&JPEGFrameView::jpeg_decoder_thread_func, this);
 
 	ycbcr_converter.reset(new YCbCrConverter(YCbCrConverter::OUTPUT_TO_RGBA, resource_pool));
 
diff -Nru nageru-1.8.2/futatabi/jpeg_frame_view.h nageru-1.8.4/futatabi/jpeg_frame_view.h
--- nageru-1.8.2/futatabi/jpeg_frame_view.h	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/futatabi/jpeg_frame_view.h	2019-03-11 23:40:21.000000000 +0100
@@ -13,6 +13,8 @@
 #include <movit/mix_effect.h>
 #include <movit/ycbcr_input.h>
 #include <stdint.h>
+#include <condition_variable>
+#include <deque>
 #include <thread>
 
 enum CacheMissBehavior {
@@ -29,6 +31,7 @@
 
 public:
 	JPEGFrameView(QWidget *parent);
+	~JPEGFrameView();
 
 	void setFrame(unsigned stream_idx, FrameOnDisk frame, FrameOnDisk secondary_frame = {}, float fade_alpha = 0.0f);
 	void setFrame(std::shared_ptr<Frame> frame);
@@ -40,8 +43,6 @@
 	void setDecodedFrame(std::shared_ptr<Frame> frame, std::shared_ptr<Frame> secondary_frame, float fade_alpha);
 	void set_overlay(const std::string &text);  // Blank for none.
 
-	static void shutdown();
-
 signals:
 	void clicked();
 
@@ -51,7 +52,7 @@
 	void paintGL() override;
 
 private:
-	static void jpeg_decoder_thread_func();
+	void jpeg_decoder_thread_func();
 
 	FrameReader frame_reader;
 
@@ -73,7 +74,22 @@
 
 	int gl_width, gl_height;
 
-	static std::thread jpeg_decoder_thread;
+	std::thread jpeg_decoder_thread;
+	movit::ResourcePool *resource_pool = nullptr;
+
+	struct PendingDecode {
+		// For actual decodes (only if frame below is nullptr).
+		FrameOnDisk primary, secondary;
+		float fade_alpha;  // Irrelevant if secondary.stream_idx == -1.
+
+		// Already-decoded frames are also sent through PendingDecode,
+		// so that they get drawn in the right order. If frame is nullptr,
+		// it's a real decode.
+		std::shared_ptr<Frame> frame;
+	};
+
+	std::condition_variable any_pending_decodes;
+	std::deque<PendingDecode> pending_decodes;  // Under cache_mu.
 };
 
 #endif  // !defined(_JPEG_FRAME_VIEW_H)
diff -Nru nageru-1.8.2/futatabi/main.cpp nageru-1.8.4/futatabi/main.cpp
--- nageru-1.8.2/futatabi/main.cpp	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/futatabi/main.cpp	2019-03-11 23:40:21.000000000 +0100
@@ -79,7 +79,7 @@
 {
 	if (open_frame_files.count(stream_idx) == 0) {
 		char filename[256];
-		snprintf(filename, sizeof(filename), "%s/frames/cam%d-pts%09ld.frames",
+		snprintf(filename, sizeof(filename), "%s/frames/cam%d-pts%09" PRId64 ".frames",
 		         global_flags.working_directory.c_str(), stream_idx, pts);
 		FILE *fp = fopen(filename, "wb");
 		if (fp == nullptr) {
@@ -145,7 +145,7 @@
 		frames[stream_idx].push_back(frame);
 	}
 
-	if (++file.frames_written_so_far >= 1000) {
+	if (++file.frames_written_so_far >= FRAMES_PER_FILE) {
 		size_t size = ftell(file.fp);
 
 		// Start a new file next time.
@@ -255,6 +255,12 @@
 
 	load_existing_frames();
 
+	for (int stream_idx = 0; stream_idx < MAX_STREAMS; ++stream_idx) {
+		if (!frames[stream_idx].empty()) {
+			assert(start_pts > frames[stream_idx].back().pts);
+		}
+	}
+
 	MainWindow main_window;
 	main_window.show();
 
@@ -269,7 +275,6 @@
 
 	should_quit = true;
 	record_thread.join();
-	JPEGFrameView::shutdown();
 
 	return ret;
 }
@@ -529,8 +534,10 @@
 			current_pts = pts;
 		}
 
-		fprintf(stderr, "%s: Hit EOF. Waiting one second and trying again...\n", global_flags.stream_source.c_str());
-		sleep(1);
+		if (!should_quit.load()) {
+			fprintf(stderr, "%s: Hit EOF. Waiting one second and trying again...\n", global_flags.stream_source.c_str());
+			sleep(1);
+		}
 
 		start_pts = last_pts + TIMEBASE;
 	}
diff -Nru nageru-1.8.2/futatabi/mainwindow.cpp nageru-1.8.4/futatabi/mainwindow.cpp
--- nageru-1.8.2/futatabi/mainwindow.cpp	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/futatabi/mainwindow.cpp	2019-03-11 23:40:21.000000000 +0100
@@ -65,8 +65,11 @@
 			global_flags.interpolation_quality = settings.interpolation_quality() - 1;
 		}
 	}
-	if (!global_flags.cue_point_padding_set) {
-		global_flags.cue_point_padding_seconds = settings.cue_point_padding_seconds();  // Default 0 is fine.
+	if (!global_flags.cue_in_point_padding_set) {
+		global_flags.cue_in_point_padding_seconds = settings.cue_in_point_padding_seconds();  // Default 0 is fine.
+	}
+	if (!global_flags.cue_out_point_padding_set) {
+		global_flags.cue_out_point_padding_seconds = settings.cue_out_point_padding_seconds();  // Default 0 is fine.
 	}
 	if (global_flags.interpolation_quality == 0) {
 		// Allocate something just for simplicity; we won't be using it
@@ -115,27 +118,49 @@
 	connect(ui->quality_3_action, &QAction::toggled, bind(&MainWindow::quality_toggled, this, 3, _1));
 	connect(ui->quality_4_action, &QAction::toggled, bind(&MainWindow::quality_toggled, this, 4, _1));
 
-	// The cue point padding group.
-	QActionGroup *padding_group = new QActionGroup(ui->interpolation_menu);
-	padding_group->addAction(ui->padding_0_action);
-	padding_group->addAction(ui->padding_1_action);
-	padding_group->addAction(ui->padding_2_action);
-	padding_group->addAction(ui->padding_5_action);
-	if (global_flags.cue_point_padding_seconds <= 1e-3) {
-		ui->padding_0_action->setChecked(true);
-	} else if (fabs(global_flags.cue_point_padding_seconds - 1.0) < 1e-3) {
-		ui->padding_1_action->setChecked(true);
-	} else if (fabs(global_flags.cue_point_padding_seconds - 2.0) < 1e-3) {
-		ui->padding_2_action->setChecked(true);
-	} else if (fabs(global_flags.cue_point_padding_seconds - 5.0) < 1e-3) {
-		ui->padding_5_action->setChecked(true);
+	// The cue-in point padding group.
+	QActionGroup *in_padding_group = new QActionGroup(ui->in_padding_menu);
+	in_padding_group->addAction(ui->in_padding_0_action);
+	in_padding_group->addAction(ui->in_padding_1_action);
+	in_padding_group->addAction(ui->in_padding_2_action);
+	in_padding_group->addAction(ui->in_padding_5_action);
+	if (global_flags.cue_in_point_padding_seconds <= 1e-3) {
+		ui->in_padding_0_action->setChecked(true);
+	} else if (fabs(global_flags.cue_in_point_padding_seconds - 1.0) < 1e-3) {
+		ui->in_padding_1_action->setChecked(true);
+	} else if (fabs(global_flags.cue_in_point_padding_seconds - 2.0) < 1e-3) {
+		ui->in_padding_2_action->setChecked(true);
+	} else if (fabs(global_flags.cue_in_point_padding_seconds - 5.0) < 1e-3) {
+		ui->in_padding_5_action->setChecked(true);
+	} else {
+		// Nothing to check, which is fine.
+	}
+	connect(ui->in_padding_0_action, &QAction::toggled, bind(&MainWindow::in_padding_toggled, this, 0.0, _1));
+	connect(ui->in_padding_1_action, &QAction::toggled, bind(&MainWindow::in_padding_toggled, this, 1.0, _1));
+	connect(ui->in_padding_2_action, &QAction::toggled, bind(&MainWindow::in_padding_toggled, this, 2.0, _1));
+	connect(ui->in_padding_5_action, &QAction::toggled, bind(&MainWindow::in_padding_toggled, this, 5.0, _1));
+
+	// Same for the cue-out padding.
+	QActionGroup *out_padding_group = new QActionGroup(ui->out_padding_menu);
+	out_padding_group->addAction(ui->out_padding_0_action);
+	out_padding_group->addAction(ui->out_padding_1_action);
+	out_padding_group->addAction(ui->out_padding_2_action);
+	out_padding_group->addAction(ui->out_padding_5_action);
+	if (global_flags.cue_out_point_padding_seconds <= 1e-3) {
+		ui->out_padding_0_action->setChecked(true);
+	} else if (fabs(global_flags.cue_out_point_padding_seconds - 1.0) < 1e-3) {
+		ui->out_padding_1_action->setChecked(true);
+	} else if (fabs(global_flags.cue_out_point_padding_seconds - 2.0) < 1e-3) {
+		ui->out_padding_2_action->setChecked(true);
+	} else if (fabs(global_flags.cue_out_point_padding_seconds - 5.0) < 1e-3) {
+		ui->out_padding_5_action->setChecked(true);
 	} else {
 		// Nothing to check, which is fine.
 	}
-	connect(ui->padding_0_action, &QAction::toggled, bind(&MainWindow::padding_toggled, this, 0.0, _1));
-	connect(ui->padding_1_action, &QAction::toggled, bind(&MainWindow::padding_toggled, this, 1.0, _1));
-	connect(ui->padding_2_action, &QAction::toggled, bind(&MainWindow::padding_toggled, this, 2.0, _1));
-	connect(ui->padding_5_action, &QAction::toggled, bind(&MainWindow::padding_toggled, this, 5.0, _1));
+	connect(ui->out_padding_0_action, &QAction::toggled, bind(&MainWindow::out_padding_toggled, this, 0.0, _1));
+	connect(ui->out_padding_1_action, &QAction::toggled, bind(&MainWindow::out_padding_toggled, this, 1.0, _1));
+	connect(ui->out_padding_2_action, &QAction::toggled, bind(&MainWindow::out_padding_toggled, this, 2.0, _1));
+	connect(ui->out_padding_5_action, &QAction::toggled, bind(&MainWindow::out_padding_toggled, this, 5.0, _1));
 
 	global_disk_space_estimator = new DiskSpaceEstimator(bind(&MainWindow::report_disk_space, this, _1, _2));
 	disk_free_label = new QLabel(this);
@@ -180,6 +205,10 @@
 	connect(play, &QShortcut::activated, ui->play_btn, &QPushButton::click);
 	connect(ui->play_btn, &QPushButton::clicked, this, &MainWindow::play_clicked);
 
+	QShortcut *next = new QShortcut(QKeySequence(Qt::Key_N), this);
+	connect(next, &QShortcut::activated, ui->next_btn, &QPushButton::click);
+	connect(ui->next_btn, &QPushButton::clicked, this, &MainWindow::next_clicked);
+
 	connect(ui->stop_btn, &QPushButton::clicked, this, &MainWindow::stop_clicked);
 	ui->stop_btn->setEnabled(false);
 
@@ -217,7 +246,7 @@
 			live_player_done();
 		});
 	});
-	live_player->set_progress_callback([this](const map<uint64_t, double> &progress, double time_remaining) {
+	live_player->set_progress_callback([this](const map<uint64_t, double> &progress, TimeRemaining time_remaining) {
 		post_to_main_thread([this, progress, time_remaining] {
 			live_player_clip_progress(progress, time_remaining);
 		});
@@ -272,9 +301,17 @@
 {
 	assert(num_cameras >= displays.size());  // We only add, never remove.
 
+	// Make new entries to hide the displays.
+	for (unsigned i = displays.size(); i < num_cameras; ++i) {
+		char title[256];
+		snprintf(title, sizeof(title), "Camera %u", i + 1);
+		QAction *hide_action = ui->hide_camera_menu->addAction(title);
+		hide_action->setCheckable(true);
+		hide_action->setChecked(false);
+		connect(hide_action, &QAction::toggled, bind(&MainWindow::hide_camera_toggled, this, i, _1));
+	}
+
 	// Make new display rows.
-	unsigned display_rows = (num_cameras + 1) / 2;
-	ui->video_displays->setStretch(1, display_rows);
 	for (unsigned i = displays.size(); i < num_cameras; ++i) {
 		QFrame *frame = new QFrame(this);
 		frame->setAutoFillBackground(true);
@@ -287,7 +324,6 @@
 		display->setAutoFillBackground(true);
 		layout->addWidget(display);
 
-		ui->input_displays->addWidget(frame, i / 2, i % 2);
 		display->set_overlay(to_string(i + 1));
 
 		QPushButton *preview_btn = new QPushButton(this);
@@ -295,7 +331,7 @@
 		preview_btn->setText(QString::fromStdString(to_string(i + 1)));
 		ui->preview_layout->addWidget(preview_btn);
 
-		displays.emplace_back(FrameAndDisplay{ frame, display, preview_btn });
+		displays.emplace_back(FrameAndDisplay{ frame, display, preview_btn, /*hidden=*/false });
 
 		connect(display, &JPEGFrameView::clicked, preview_btn, &QPushButton::click);
 		QShortcut *shortcut = new QShortcut(QKeySequence(Qt::Key_1 + i), this);
@@ -303,6 +339,7 @@
 
 		connect(preview_btn, &QPushButton::clicked, [this, i] { preview_angle_clicked(i); });
 	}
+	relayout_displays();
 
 	cliplist_clips->change_num_cameras(num_cameras);
 	playlist_clips->change_num_cameras(num_cameras);
@@ -310,6 +347,28 @@
 	QMetaObject::invokeMethod(this, "relayout", Qt::QueuedConnection);
 }
 
+void MainWindow::relayout_displays()
+{
+	while (ui->input_displays->count() > 0) {
+		QLayoutItem *item = ui->input_displays->takeAt(0);
+		ui->input_displays->removeWidget(item->widget());
+	}
+
+	unsigned cell_idx = 0;
+	for (unsigned i = 0; i < displays.size(); ++i) {
+		if (displays[i].hidden) {
+			displays[i].frame->setVisible(false);
+		} else {
+			displays[i].frame->setVisible(true);
+			ui->input_displays->addWidget(displays[i].frame, cell_idx / 2, cell_idx % 2);
+			++cell_idx;
+		}
+	}
+	ui->video_displays->setStretch(1, (cell_idx + 1) / 2);
+
+	QMetaObject::invokeMethod(this, "relayout", Qt::QueuedConnection);
+}
+
 MainWindow::~MainWindow()
 {
 	// We don't have a context to release Player's OpenGL resources in here,
@@ -324,7 +383,7 @@
 		cliplist_clips->mutable_back()->pts_in = current_pts;
 	} else {
 		Clip clip;
-		clip.pts_in = max<int64_t>(current_pts - lrint(global_flags.cue_point_padding_seconds * TIMEBASE), 0);
+		clip.pts_in = max<int64_t>(current_pts - lrint(global_flags.cue_in_point_padding_seconds * TIMEBASE), 0);
 		cliplist_clips->add_clip(clip);
 		playlist_selection_changed();
 	}
@@ -343,7 +402,7 @@
 		return;
 	}
 
-	cliplist_clips->mutable_back()->pts_out = current_pts + lrint(global_flags.cue_point_padding_seconds * TIMEBASE);
+	cliplist_clips->mutable_back()->pts_out = current_pts + lrint(global_flags.cue_out_point_padding_seconds * TIMEBASE);
 
 	// Select the item so that we can jog it.
 	ui->clip_list->setFocus();
@@ -365,11 +424,9 @@
 	if (!selected->hasSelection()) {
 		Clip clip = *cliplist_clips->back();
 		clip.stream_idx = 0;
-		if (clip.pts_out != -1) {
-			playlist_clips->add_clip(clip);
-			playlist_selection_changed();
-			ui->playlist->scrollToBottom();
-		}
+		playlist_clips->add_clip(clip);
+		playlist_selection_changed();
+		ui->playlist->scrollToBottom();
 		return;
 	}
 
@@ -381,15 +438,13 @@
 		clip.stream_idx = ui->preview_display->get_stream_idx();
 	}
 
-	if (clip.pts_out != -1) {
-		playlist_clips->add_clip(clip);
-		playlist_selection_changed();
-		ui->playlist->scrollToBottom();
-		if (!ui->playlist->selectionModel()->hasSelection()) {
-			// TODO: Figure out why this doesn't always seem to actually select the row.
-			QModelIndex bottom = playlist_clips->index(playlist_clips->size() - 1, 0);
-			ui->playlist->setCurrentIndex(bottom);
-		}
+	playlist_clips->add_clip(clip);
+	playlist_selection_changed();
+	ui->playlist->scrollToBottom();
+	if (!ui->playlist->selectionModel()->hasSelection()) {
+		// TODO: Figure out why this doesn't always seem to actually select the row.
+		QModelIndex bottom = playlist_clips->index(playlist_clips->size() - 1, 0);
+		ui->playlist->setCurrentIndex(bottom);
 	}
 }
 
@@ -428,6 +483,9 @@
 	} else {
 		clip.stream_idx = ui->preview_display->get_stream_idx();
 	}
+	if (clip.pts_out == -1) {
+		clip.pts_out = clip.pts_in + int64_t(TIMEBASE) * 86400 * 7;  // One week; effectively infinite, but without overflow issues.
+	}
 	preview_player->play(clip);
 	preview_playing = true;
 	enable_or_disable_preview_button();
@@ -609,7 +667,8 @@
 {
 	SettingsProto settings;
 	settings.set_interpolation_quality(global_flags.interpolation_quality + 1);
-	settings.set_cue_point_padding_seconds(global_flags.cue_point_padding_seconds);
+	settings.set_cue_in_point_padding_seconds(global_flags.cue_in_point_padding_seconds);
+	settings.set_cue_out_point_padding_seconds(global_flags.cue_out_point_padding_seconds);
 	db.store_settings(settings);
 }
 
@@ -628,7 +687,11 @@
 
 	vector<ClipWithID> clips;
 	for (unsigned row = start_row; row < playlist_clips->size(); ++row) {
-		clips.emplace_back(*playlist_clips->clip_with_id(row));
+		ClipWithID clip = *playlist_clips->clip_with_id(row);
+		if (clip.clip.pts_out == -1) {
+			clip.clip.pts_out = clip.clip.pts_in + int64_t(TIMEBASE) * 86400 * 7;  // One week; effectively infinite, but without overflow issues.
+		}
+		clips.emplace_back(clip);
 	}
 	live_player->play(clips);
 	playlist_clips->set_progress({ { start_row, 0.0f } });
@@ -637,6 +700,11 @@
 	playlist_selection_changed();
 }
 
+void MainWindow::next_clicked()
+{
+	live_player->skip_to_next();
+}
+
 void MainWindow::stop_clicked()
 {
 	Clip fake_clip;
@@ -678,7 +746,7 @@
 	playlist_selection_changed();
 }
 
-void MainWindow::live_player_clip_progress(const map<uint64_t, double> &progress, double time_remaining)
+void MainWindow::live_player_clip_progress(const map<uint64_t, double> &progress, TimeRemaining time_remaining)
 {
 	playlist_clips->set_progress(progress);
 	set_output_status(format_duration(time_remaining) + " left");
@@ -707,6 +775,7 @@
 
 	if (event->type() == QEvent::FocusIn || event->type() == QEvent::FocusOut) {
 		enable_or_disable_preview_button();
+		playlist_selection_changed();
 		hidden_jog_column = -1;
 	}
 
@@ -775,6 +844,11 @@
 		if (mouse->modifiers() & Qt::KeyboardModifier::ShiftModifier) {
 			scrub_sensitivity *= 10;
 			wheel_sensitivity *= 10;
+			if (mouse->modifiers() & Qt::KeyboardModifier::ControlModifier) {
+				// Ctrl+Shift is a super-modifier, meant only for things like “go back two hours”.
+				scrub_sensitivity *= 100;
+				wheel_sensitivity *= 100;
+			}
 		}
 		if (mouse->modifiers() & Qt::KeyboardModifier::AltModifier) {  // Note: Shift + Alt cancel each other out.
 			scrub_sensitivity /= 10;
@@ -830,6 +904,11 @@
 		if (wheel->modifiers() & Qt::KeyboardModifier::ShiftModifier) {
 			scrub_sensitivity *= 10;
 			wheel_sensitivity *= 10;
+			if (wheel->modifiers() & Qt::KeyboardModifier::ControlModifier) {
+				// Ctrl+Shift is a super-modifier, meant only for things like “go back two hours”.
+				scrub_sensitivity *= 100;
+				wheel_sensitivity *= 100;
+			}
 		}
 		if (wheel->modifiers() & Qt::KeyboardModifier::AltModifier) {  // Note: Shift + Alt cancel each other out.
 			scrub_sensitivity /= 10;
@@ -915,6 +994,19 @@
 		any_selected && selected->selectedRows().back().row() < int(playlist_clips->size()) - 1);
 
 	ui->play_btn->setEnabled(any_selected);
+	ui->next_btn->setEnabled(ui->stop_btn->isEnabled());  // TODO: Perhaps not if we're on the last clip?
+	midi_mapper.set_next_ready(ui->next_btn->isEnabled() ? MIDIMapper::On : MIDIMapper::Off);
+
+	// NOTE: The hidden button is still reachable by keyboard or MIDI.
+	if (any_selected) {
+		ui->play_btn->setVisible(true);
+	} else if (ui->stop_btn->isEnabled()) {  // Playing.
+		ui->play_btn->setVisible(false);
+	} else {
+		ui->play_btn->setVisible(true);
+	}
+	ui->next_btn->setVisible(!ui->play_btn->isVisible());
+
 	if (ui->stop_btn->isEnabled()) {  // Playing.
 		midi_mapper.set_play_enabled(MIDIMapper::On);
 	} else if (any_selected) {
@@ -930,7 +1022,7 @@
 		for (size_t row = selected->selectedRows().front().row(); row < playlist_clips->size(); ++row) {
 			clips.emplace_back(*playlist_clips->clip_with_id(row));
 		}
-		double remaining = compute_total_time(clips);
+		TimeRemaining remaining = compute_total_time(clips);
 		set_output_status(format_duration(remaining) + " ready");
 	}
 }
@@ -1130,15 +1222,30 @@
 	save_settings();
 }
 
-void MainWindow::padding_toggled(double seconds, bool checked)
+void MainWindow::in_padding_toggled(double seconds, bool checked)
 {
 	if (!checked) {
 		return;
 	}
-	global_flags.cue_point_padding_seconds = seconds;
+	global_flags.cue_in_point_padding_seconds = seconds;
 	save_settings();
 }
 
+void MainWindow::out_padding_toggled(double seconds, bool checked)
+{
+	if (!checked) {
+		return;
+	}
+	global_flags.cue_out_point_padding_seconds = seconds;
+	save_settings();
+}
+
+void MainWindow::hide_camera_toggled(unsigned camera_idx, bool checked)
+{
+	displays[camera_idx].hidden = checked;
+	relayout_displays();
+}
+
 void MainWindow::highlight_camera_input(int stream_idx)
 {
 	for (unsigned i = 0; i < num_cameras; ++i) {
@@ -1185,15 +1292,7 @@
 	if (cliplist_clips->empty()) {
 		enabled = false;
 	} else {
-		QItemSelectionModel *selected = ui->clip_list->selectionModel();
-		if (!selected->hasSelection()) {
-			Clip clip = *cliplist_clips->back();
-			enabled = clip.pts_out != -1;
-		} else {
-			QModelIndex index = selected->currentIndex();
-			Clip clip = *cliplist_clips->clip(index.row());
-			enabled = clip.pts_out != -1;
-		}
+		enabled = true;
 	}
 
 	ui->queue_btn->setEnabled(enabled);
@@ -1253,6 +1352,13 @@
 	});
 }
 
+void MainWindow::next()
+{
+	post_to_main_thread([this] {
+		next_clicked();
+	});
+}
+
 void MainWindow::toggle_lock()
 {
 	post_to_main_thread([this] {
diff -Nru nageru-1.8.2/futatabi/mainwindow.h nageru-1.8.4/futatabi/mainwindow.h
--- nageru-1.8.2/futatabi/mainwindow.h	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/futatabi/mainwindow.h	2019-03-11 23:40:21.000000000 +0100
@@ -4,6 +4,7 @@
 #include "clip_list.h"
 #include "db.h"
 #include "midi_mapper.h"
+#include "player.h"
 #include "state.pb.h"
 
 #include <QLabel>
@@ -43,6 +44,7 @@
 	void preview() override;
 	void queue() override;
 	void play() override;
+	void next() override;
 	void toggle_lock() override;
 	void jog(int delta) override;
 	void switch_camera(unsigned camera_idx) override;
@@ -118,6 +120,7 @@
 		QFrame *frame;
 		JPEGFrameView *display;
 		QPushButton *preview_btn;
+		bool hidden = false;
 	};
 	std::vector<FrameAndDisplay> displays;
 
@@ -128,18 +131,20 @@
 	MIDIMapper midi_mapper;
 
 	void change_num_cameras();
+	void relayout_displays();
 	void cue_in_clicked();
 	void cue_out_clicked();
 	void queue_clicked();
 	void preview_clicked();
 	void preview_angle_clicked(unsigned stream_idx);
 	void play_clicked();
+	void next_clicked();
 	void stop_clicked();
 	void speed_slider_changed(int percent);
 	void speed_lock_clicked();
 	void preview_player_done();
 	void live_player_done();
-	void live_player_clip_progress(const std::map<uint64_t, double> &progress, double time_remaining);
+	void live_player_clip_progress(const std::map<uint64_t, double> &progress, TimeRemaining time_remaining);
 	void set_output_status(const std::string &status);
 	void playlist_duplicate();
 	void playlist_remove();
@@ -177,7 +182,9 @@
 	void undo_triggered();
 	void redo_triggered();
 	void quality_toggled(int quality, bool checked);
-	void padding_toggled(double seconds, bool checked);
+	void in_padding_toggled(double seconds, bool checked);
+	void out_padding_toggled(double seconds, bool checked);
+	void hide_camera_toggled(unsigned camera_idx, bool checked);
 
 	void highlight_camera_input(int stream_idx);
 	void enable_or_disable_preview_button();
diff -Nru nageru-1.8.2/futatabi/mainwindow.ui nageru-1.8.4/futatabi/mainwindow.ui
--- nageru-1.8.2/futatabi/mainwindow.ui	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/futatabi/mainwindow.ui	2019-03-11 23:40:21.000000000 +0100
@@ -6,7 +6,7 @@
    <rect>
     <x>0</x>
     <y>0</y>
-    <width>1038</width>
+    <width>1061</width>
     <height>600</height>
    </rect>
   </property>
@@ -211,6 +211,17 @@
             </property>
            </widget>
           </item>
+          <item>
+           <widget class="QPushButton" name="next_btn">
+            <property name="text">
+             <string>Next (N)</string>
+            </property>
+            <property name="icon">
+             <iconset theme="media-skip-forward">
+              <normaloff>.</normaloff>.</iconset>
+            </property>
+           </widget>
+          </item>
          </layout>
         </item>
        </layout>
@@ -354,7 +365,7 @@
     <rect>
      <x>0</x>
      <y>0</y>
-     <width>1038</width>
+     <width>1061</width>
      <height>22</height>
     </rect>
    </property>
@@ -379,17 +390,27 @@
      <addaction name="quality_3_action"/>
      <addaction name="quality_4_action"/>
     </widget>
-    <widget class="QMenu" name="padding_menu">
+    <widget class="QMenu" name="in_padding_menu">
+     <property name="title">
+      <string>Cue &amp;in point padding</string>
+     </property>
+     <addaction name="in_padding_0_action"/>
+     <addaction name="in_padding_1_action"/>
+     <addaction name="in_padding_2_action"/>
+     <addaction name="in_padding_5_action"/>
+    </widget>
+    <widget class="QMenu" name="out_padding_menu">
      <property name="title">
-      <string>Cue point &amp;padding</string>
+      <string>Cue &amp;out point padding</string>
      </property>
-     <addaction name="padding_0_action"/>
-     <addaction name="padding_1_action"/>
-     <addaction name="padding_2_action"/>
-     <addaction name="padding_5_action"/>
+     <addaction name="out_padding_0_action"/>
+     <addaction name="out_padding_1_action"/>
+     <addaction name="out_padding_2_action"/>
+     <addaction name="out_padding_5_action"/>
     </widget>
     <addaction name="interpolation_menu"/>
-    <addaction name="padding_menu"/>
+    <addaction name="in_padding_menu"/>
+    <addaction name="out_padding_menu"/>
     <addaction name="menu_Export"/>
     <addaction name="midi_mapping_action"/>
     <addaction name="exit_action"/>
@@ -409,7 +430,19 @@
     <addaction name="undo_action"/>
     <addaction name="redo_action"/>
    </widget>
+   <widget class="QMenu" name="view_menu">
+    <property name="title">
+     <string>V&amp;iew</string>
+    </property>
+    <widget class="QMenu" name="hide_camera_menu">
+     <property name="title">
+      <string>&amp;Hide camera</string>
+     </property>
+    </widget>
+    <addaction name="hide_camera_menu"/>
+   </widget>
    <addaction name="menuFile"/>
+   <addaction name="view_menu"/>
    <addaction name="menu_Edit"/>
    <addaction name="menu_Help"/>
   </widget>
@@ -494,7 +527,7 @@
     <string>Best (&amp;4) (not realtime on any current GPU)</string>
    </property>
   </action>
-  <action name="padding_0_action">
+  <action name="in_padding_0_action">
    <property name="checkable">
     <bool>true</bool>
    </property>
@@ -502,7 +535,7 @@
     <string>&amp;0 seconds</string>
    </property>
   </action>
-  <action name="padding_1_action">
+  <action name="in_padding_1_action">
    <property name="checkable">
     <bool>true</bool>
    </property>
@@ -510,7 +543,7 @@
     <string>&amp;1 second</string>
    </property>
   </action>
-  <action name="padding_2_action">
+  <action name="in_padding_2_action">
    <property name="checkable">
     <bool>true</bool>
    </property>
@@ -518,7 +551,7 @@
     <string>&amp;2 seconds</string>
    </property>
   </action>
-  <action name="padding_5_action">
+  <action name="in_padding_5_action">
    <property name="checkable">
     <bool>true</bool>
    </property>
@@ -531,6 +564,38 @@
     <string>Setup MIDI controller…</string>
    </property>
   </action>
+  <action name="out_padding_0_action">
+   <property name="checkable">
+    <bool>true</bool>
+   </property>
+   <property name="text">
+    <string>&amp;0 seconds</string>
+   </property>
+  </action>
+  <action name="out_padding_1_action">
+   <property name="checkable">
+    <bool>true</bool>
+   </property>
+   <property name="text">
+    <string>&amp;1 seconds</string>
+   </property>
+  </action>
+  <action name="out_padding_2_action">
+   <property name="checkable">
+    <bool>true</bool>
+   </property>
+   <property name="text">
+    <string>&amp;2 seconds</string>
+   </property>
+  </action>
+  <action name="out_padding_5_action">
+   <property name="checkable">
+    <bool>true</bool>
+   </property>
+   <property name="text">
+    <string>&amp;5 seconds</string>
+   </property>
+  </action>
  </widget>
  <customwidgets>
   <customwidget>
diff -Nru nageru-1.8.2/futatabi/midi_mapper.cpp nageru-1.8.4/futatabi/midi_mapper.cpp
--- nageru-1.8.2/futatabi/midi_mapper.cpp	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/futatabi/midi_mapper.cpp	2019-03-11 23:40:21.000000000 +0100
@@ -143,6 +143,8 @@
 		bind(&ControllerReceiver::queue, receiver));
 	match_button(note, MIDIMappingProto::kPlayFieldNumber, MIDIMappingProto::kPlayBankFieldNumber,
 		bind(&ControllerReceiver::play, receiver));
+	match_button(note, MIDIMappingProto::kNextFieldNumber, MIDIMappingProto::kNextButtonBankFieldNumber,
+		bind(&ControllerReceiver::next, receiver));
 	match_button(note, MIDIMappingProto::kToggleLockFieldNumber, MIDIMappingProto::kToggleLockBankFieldNumber,
 		bind(&ControllerReceiver::toggle_lock, receiver));
 
@@ -236,6 +238,9 @@
 	} else if (play_enabled_light == Blinking) {  // Play ready.
 		activate_mapped_light(*mapping_proto, MIDIMappingProto::kPlayReadyFieldNumber, &active_lights);
 	}
+	if (next_ready_light == On) {
+		activate_mapped_light(*mapping_proto, MIDIMappingProto::kNextReadyFieldNumber, &active_lights);
+	}
 	if (locked_light == On) {
 		activate_mapped_light(*mapping_proto, MIDIMappingProto::kLockedFieldNumber, &active_lights);
 	} else if (locked_light == Blinking) {
diff -Nru nageru-1.8.2/futatabi/midi_mapper.h nageru-1.8.4/futatabi/midi_mapper.h
--- nageru-1.8.2/futatabi/midi_mapper.h	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/futatabi/midi_mapper.h	2019-03-11 23:40:21.000000000 +0100
@@ -31,6 +31,7 @@
 	virtual void preview() = 0;
 	virtual void queue() = 0;
 	virtual void play() = 0;
+	virtual void next() = 0;
 	virtual void toggle_lock() = 0;
 	virtual void jog(int delta) = 0;
 	virtual void switch_camera(unsigned camera_idx) = 0;
@@ -75,6 +76,10 @@
 		play_enabled_light = enabled;
 		refresh_lights();
 	}
+	void set_next_ready(LightState enabled) {
+		next_ready_light = enabled;
+		refresh_lights();
+	}
 	void set_locked(LightState locked) {
 		locked_light = locked;
 		refresh_lights();
@@ -112,6 +117,7 @@
 	std::atomic<LightState> preview_enabled_light{Off};
 	std::atomic<bool> queue_enabled_light{false};
 	std::atomic<LightState> play_enabled_light{Off};
+	std::atomic<LightState> next_ready_light{Off};
 	std::atomic<LightState> locked_light{On};
 	std::atomic<int> current_highlighted_camera{-1};
 	std::atomic<float> current_speed{1.0f};
diff -Nru nageru-1.8.2/futatabi/midi_mapping_dialog.cpp nageru-1.8.4/futatabi/midi_mapping_dialog.cpp
--- nageru-1.8.2/futatabi/midi_mapping_dialog.cpp	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/futatabi/midi_mapping_dialog.cpp	2019-03-11 23:40:21.000000000 +0100
@@ -46,6 +46,8 @@
 	                  MIDIMappingProto::kQueueBankFieldNumber },
 	{ "Play",         MIDIMappingProto::kPlayFieldNumber,
 	                  MIDIMappingProto::kPlayBankFieldNumber },
+	{ "Next",         MIDIMappingProto::kNextFieldNumber,
+	                  MIDIMappingProto::kNextButtonBankFieldNumber },
 	{ "Lock master speed", MIDIMappingProto::kToggleLockFieldNumber,
 	                  MIDIMappingProto::kToggleLockBankFieldNumber },
 	{ "Cue in",       MIDIMappingProto::kCueInFieldNumber,
@@ -66,6 +68,7 @@
         { "Queue button enabled", MIDIMappingProto::kQueueEnabledFieldNumber, 0 },
         { "Playing",              MIDIMappingProto::kPlayingFieldNumber, 0 },
         { "Play ready",           MIDIMappingProto::kPlayReadyFieldNumber, 0 },
+        { "Next ready",           MIDIMappingProto::kNextReadyFieldNumber, 0 },
         { "Master speed locked",  MIDIMappingProto::kLockedFieldNumber, 0 },
         { "Master speed locked (blinking)",
 	                          MIDIMappingProto::kLockedBlinkingFieldNumber, 0 },
diff -Nru nageru-1.8.2/futatabi/midi_mapping_dialog.h nageru-1.8.4/futatabi/midi_mapping_dialog.h
--- nageru-1.8.2/futatabi/midi_mapping_dialog.h	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/futatabi/midi_mapping_dialog.h	2019-03-11 23:40:21.000000000 +0100
@@ -43,6 +43,7 @@
 	void preview() override {}
 	void queue() override {}
 	void play() override {}
+	void next() override {}
 	void toggle_lock() override {}
 	void jog(int delta) override {}
 	void switch_camera(unsigned camera_idx) override {}
diff -Nru nageru-1.8.2/futatabi/player.cpp nageru-1.8.4/futatabi/player.cpp
--- nageru-1.8.2/futatabi/player.cpp	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/futatabi/player.cpp	2019-03-11 23:40:21.000000000 +0100
@@ -151,6 +151,7 @@
 		return;
 	}
 
+	should_skip_to_next = false;  // To make sure we don't have a lingering click from before play.
 	steady_clock::time_point origin = steady_clock::now();  // TODO: Add a 100 ms buffer for ramp-up?
 	int64_t in_pts_origin = clip_list[0].clip.pts_in;
 	for (size_t clip_idx = 0; clip_idx < clip_list.size(); ++clip_idx) {
@@ -181,7 +182,7 @@
 		}
 
 		steady_clock::time_point next_frame_start;
-		for (int frameno = 0; !should_quit; ++frameno) {  // Ends when the clip ends.
+		for (int64_t frameno = 0; !should_quit; ++frameno) {  // Ends when the clip ends.
 			double out_pts = out_pts_origin + TIMEBASE * frameno / global_flags.output_framerate;
 			next_frame_start =
 				origin + microseconds(lrint((out_pts - out_pts_origin) * 1e6 / TIMEBASE));
@@ -195,6 +196,11 @@
 				out_pts_origin = out_pts - TIMEBASE * frameno / global_flags.output_framerate;
 			}
 
+			if (should_skip_to_next.exchange(false)) {  // Test and clear.
+				Clip *clip = &clip_list[clip_idx].clip;  // Get a non-const pointer.
+				clip->pts_out = std::min<int64_t>(clip->pts_out, llrint(in_pts + clip->fade_time_seconds * clip->speed * TIMEBASE));
+			}
+
 			if (in_pts >= clip->pts_out) {
 				break;
 			}
@@ -265,7 +271,7 @@
 			// NOTE: None of this will take into account any snapping done below.
 			double clip_progress = calc_progress(*clip, in_pts_for_progress);
 			map<uint64_t, double> progress{ { clip_list[clip_idx].id, clip_progress } };
-			double time_remaining;
+			TimeRemaining time_remaining;
 			if (next_clip != nullptr && time_left_this_clip <= next_clip_fade_time) {
 				double next_clip_progress = calc_progress(*next_clip, in_pts_secondary_for_progress);
 				progress[clip_list[clip_idx + 1].id] = next_clip_progress;
@@ -333,7 +339,7 @@
 				ss.imbue(locale("C"));
 				ss.precision(3);
 				ss << "Futatabi " NAGERU_VERSION ";PLAYING;";
-				ss << fixed << time_remaining;
+				ss << fixed << (time_remaining.num_infinite * 86400.0 + time_remaining.t);
 				ss << ";" << format_duration(time_remaining) << " left";
 				subtitle = ss.str();
 			}
@@ -350,7 +356,7 @@
 			// Snap to input frame: If we can do so with less than 1% jitter
 			// (ie., move less than 1% of an _output_ frame), do so.
 			// TODO: Snap secondary (fade-to) clips in the same fashion.
-			double pts_snap_tolerance = 0.01 * double(TIMEBASE) / global_flags.output_framerate;
+			double pts_snap_tolerance = 0.01 * double(TIMEBASE) * clip->speed / global_flags.output_framerate;
 			bool snapped = false;
 			for (FrameOnDisk snap_frame : { frame_lower, frame_upper }) {
 				if (fabs(snap_frame.pts - in_pts) < pts_snap_tolerance) {
@@ -596,30 +602,34 @@
 	new_clip_changed.notify_all();
 }
 
-double compute_time_left(const vector<ClipWithID> &clips, size_t currently_playing_idx, double progress_currently_playing)
+TimeRemaining compute_time_left(const vector<ClipWithID> &clips, size_t currently_playing_idx, double progress_currently_playing)
 {
 	// Look at the last clip and then start counting from there.
-	double remaining = 0.0;
+	TimeRemaining remaining { 0, 0.0 };
 	double last_fade_time_seconds = 0.0;
 	for (size_t row = currently_playing_idx; row < clips.size(); ++row) {
 		const Clip &clip = clips[row].clip;
 		double clip_length = double(clip.pts_out - clip.pts_in) / TIMEBASE / clip.speed;
-		if (row == currently_playing_idx) {
-			// A clip we're playing: Subtract the part we've already played.
-			remaining = clip_length * (1.0 - progress_currently_playing);
+		if (clip_length >= 86400.0) {  // More than one day.
+			++remaining.num_infinite;
 		} else {
-			// A clip we haven't played yet: Subtract the part that's overlapping
-			// with a previous clip (due to fade).
-			remaining += max(clip_length - last_fade_time_seconds, 0.0);
+			if (row == currently_playing_idx) {
+				// A clip we're playing: Subtract the part we've already played.
+				remaining.t = clip_length * (1.0 - progress_currently_playing);
+			} else {
+				// A clip we haven't played yet: Subtract the part that's overlapping
+				// with a previous clip (due to fade).
+				remaining.t += max(clip_length - last_fade_time_seconds, 0.0);
+			}
 		}
 		last_fade_time_seconds = min(clip_length, clip.fade_time_seconds);
 	}
 	return remaining;
 }
 
-string format_duration(double t)
+string format_duration(TimeRemaining t)
 {
-	int t_ms = lrint(t * 1e3);
+	int t_ms = lrint(t.t * 1e3);
 
 	int ms = t_ms % 1000;
 	t_ms /= 1000;
@@ -628,6 +638,16 @@
 	int m = t_ms;
 
 	char buf[256];
-	snprintf(buf, sizeof(buf), "%d:%02d.%03d", m, s, ms);
+	if (t.num_infinite > 1 && t.t > 0.0) {
+		snprintf(buf, sizeof(buf), "%zu clips + %d:%02d.%03d", t.num_infinite, m, s, ms);
+	} else if (t.num_infinite > 1) {
+		snprintf(buf, sizeof(buf), "%zu clips", t.num_infinite);
+	} else if (t.num_infinite == 1 && t.t > 0.0) {
+		snprintf(buf, sizeof(buf), "%zu clip + %d:%02d.%03d", t.num_infinite, m, s, ms);
+	} else if (t.num_infinite == 1) {
+		snprintf(buf, sizeof(buf), "%zu clip", t.num_infinite);
+	} else {
+		snprintf(buf, sizeof(buf), "%d:%02d.%03d", m, s, ms);
+	}
 	return buf;
 }
diff -Nru nageru-1.8.2/futatabi/player.h nageru-1.8.4/futatabi/player.h
--- nageru-1.8.2/futatabi/player.h	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/futatabi/player.h	2019-03-11 23:40:21.000000000 +0100
@@ -20,6 +20,11 @@
 class QSurface;
 class QSurfaceFormat;
 
+struct TimeRemaining {
+	size_t num_infinite;
+	double t;
+};
+
 class Player : public QueueInterface {
 public:
 	enum StreamOutput {
@@ -61,6 +66,11 @@
 		pause_status = status;
 	}
 
+	void skip_to_next()
+	{
+		should_skip_to_next = true;
+	}
+
 	void set_master_speed(float speed)
 	{
 		change_master_speed = speed;
@@ -74,7 +84,7 @@
 	// Not thread-safe to set concurrently with playing.
 	// Will be called back from the player thread.
 	// The keys in the given map are row members in the vector given to play().
-	using progress_callback_func = std::function<void(const std::map<uint64_t, double> &progress, double time_remaining)>;
+	using progress_callback_func = std::function<void(const std::map<uint64_t, double> &progress, TimeRemaining time_remaining)>;
 	void set_progress_callback(progress_callback_func cb) { progress_callback = cb; }
 
 	// QueueInterface.
@@ -95,6 +105,7 @@
 
 	std::thread player_thread;
 	std::atomic<bool> should_quit{ false };
+	std::atomic<bool> should_skip_to_next{ false };
 	std::atomic<float> change_master_speed{ 0.0f / 0.0f };
 
 	JPEGFrameView *destination;
@@ -135,13 +146,13 @@
 	const StreamOutput stream_output;
 };
 
-double compute_time_left(const std::vector<ClipWithID> &clips, size_t currently_playing_idx, double progress_currently_playing);
+TimeRemaining compute_time_left(const std::vector<ClipWithID> &clips, size_t currently_playing_idx, double progress_currently_playing);
 
-static inline double compute_total_time(const std::vector<ClipWithID> &clips)
+static inline TimeRemaining compute_total_time(const std::vector<ClipWithID> &clips)
 {
 	return compute_time_left(clips, 0, 0.0);
 }
 
-std::string format_duration(double t);
+std::string format_duration(TimeRemaining t);
 
 #endif  // !defined(_PLAYER_H)
diff -Nru nageru-1.8.2/futatabi/state.proto nageru-1.8.4/futatabi/state.proto
--- nageru-1.8.2/futatabi/state.proto	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/futatabi/state.proto	2019-03-11 23:40:21.000000000 +0100
@@ -21,5 +21,6 @@
 
 message SettingsProto {
 	int32 interpolation_quality = 1;  // 0 = unset, 1 = quality 0, 2 = quality 1, etc.
-	double cue_point_padding_seconds = 2;
+	double cue_in_point_padding_seconds = 2;
+	double cue_out_point_padding_seconds = 3;
 }
diff -Nru nageru-1.8.2/futatabi/video_stream.cpp nageru-1.8.4/futatabi/video_stream.cpp
--- nageru-1.8.2/futatabi/video_stream.cpp	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/futatabi/video_stream.cpp	2019-03-11 23:40:21.000000000 +0100
@@ -28,7 +28,7 @@
 
 struct VectorDestinationManager {
 	jpeg_destination_mgr pub;
-	std::vector<uint8_t> dest;
+	string dest;
 
 	VectorDestinationManager()
 	{
@@ -62,7 +62,7 @@
 	{
 		dest.resize(bytes_used + 4096);
 		dest.resize(dest.capacity());
-		pub.next_output_byte = dest.data() + bytes_used;
+		pub.next_output_byte = (uint8_t *)dest.data() + bytes_used;
 		pub.free_in_buffer = dest.size() - bytes_used;
 	}
 
@@ -78,7 +78,7 @@
 };
 static_assert(std::is_standard_layout<VectorDestinationManager>::value, "");
 
-vector<uint8_t> encode_jpeg(const uint8_t *y_data, const uint8_t *cb_data, const uint8_t *cr_data, unsigned width, unsigned height)
+string encode_jpeg(const uint8_t *y_data, const uint8_t *cb_data, const uint8_t *cr_data, unsigned width, unsigned height)
 {
 	VectorDestinationManager dest;
 
@@ -333,7 +333,7 @@
                                           QueueSpotHolder &&queue_spot_holder,
                                           FrameOnDisk frame, const string &subtitle)
 {
-	fprintf(stderr, "output_pts=%ld  original      input_pts=%ld\n", output_pts, frame.pts);
+	fprintf(stderr, "output_pts=%" PRId64 "  original      input_pts=%" PRId64 "\n", output_pts, frame.pts);
 
 	QueuedFrame qf;
 	qf.local_pts = local_pts;
@@ -355,7 +355,7 @@
                                        FrameOnDisk frame1_spec, FrameOnDisk frame2_spec,
                                        float fade_alpha, const string &subtitle)
 {
-	fprintf(stderr, "output_pts=%ld  faded         input_pts=%ld,%ld  fade_alpha=%.2f\n", output_pts, frame1_spec.pts, frame2_spec.pts, fade_alpha);
+	fprintf(stderr, "output_pts=%" PRId64 "  faded         input_pts=%" PRId64 ",%" PRId64 "  fade_alpha=%.2f\n", output_pts, frame1_spec.pts, frame2_spec.pts, fade_alpha);
 
 	// Get the temporary OpenGL resources we need for doing the fade.
 	// (We share these with interpolated frames, which is slightly
@@ -425,9 +425,9 @@
                                               float alpha, FrameOnDisk secondary_frame, float fade_alpha, const string &subtitle)
 {
 	if (secondary_frame.pts != -1) {
-		fprintf(stderr, "output_pts=%ld  interpolated  input_pts1=%ld input_pts2=%ld alpha=%.3f  secondary_pts=%ld  fade_alpha=%.2f\n", output_pts, frame1.pts, frame2.pts, alpha, secondary_frame.pts, fade_alpha);
+		fprintf(stderr, "output_pts=%" PRId64 "  interpolated  input_pts1=%" PRId64 " input_pts2=%" PRId64 " alpha=%.3f  secondary_pts=%" PRId64 "  fade_alpha=%.2f\n", output_pts, frame1.pts, frame2.pts, alpha, secondary_frame.pts, fade_alpha);
 	} else {
-		fprintf(stderr, "output_pts=%ld  interpolated  input_pts1=%ld input_pts2=%ld alpha=%.3f\n", output_pts, frame1.pts, frame2.pts, alpha);
+		fprintf(stderr, "output_pts=%" PRId64 "  interpolated  input_pts1=%" PRId64 " input_pts2=%" PRId64 " alpha=%.3f\n", output_pts, frame1.pts, frame2.pts, alpha);
 	}
 
 	// Get the temporary OpenGL resources we need for doing the interpolation.
@@ -659,15 +659,14 @@
 			pkt.size = jpeg.size();
 			pkt.flags = AV_PKT_FLAG_KEY;
 			mux->add_packet(pkt, qf.output_pts, qf.output_pts);
-
-			last_frame.assign(&jpeg[0], &jpeg[0] + jpeg.size());
+			last_frame = move(jpeg);
 		} else if (qf.type == QueuedFrame::FADED) {
 			glClientWaitSync(qf.fence.get(), /*flags=*/0, GL_TIMEOUT_IGNORED);
 
 			shared_ptr<Frame> frame = frame_from_pbo(qf.resources->pbo_contents, global_flags.width, global_flags.height);
 
 			// Now JPEG encode it, and send it on to the stream.
-			vector<uint8_t> jpeg = encode_jpeg(frame->y.get(), frame->cb.get(), frame->cr.get(), global_flags.width, global_flags.height);
+			string jpeg = encode_jpeg(frame->y.get(), frame->cb.get(), frame->cr.get(), global_flags.width, global_flags.height);
 
 			AVPacket pkt;
 			av_init_packet(&pkt);
@@ -687,7 +686,7 @@
 			}
 
 			// Now JPEG encode it, and send it on to the stream.
-			vector<uint8_t> jpeg = encode_jpeg(frame->y.get(), frame->cb.get(), frame->cr.get(), global_flags.width, global_flags.height);
+			string jpeg = encode_jpeg(frame->y.get(), frame->cb.get(), frame->cr.get(), global_flags.width, global_flags.height);
 			if (qf.flow_tex != 0) {
 				compute_flow->release_texture(qf.flow_tex);
 			}
diff -Nru nageru-1.8.2/futatabi/video_stream.h nageru-1.8.4/futatabi/video_stream.h
--- nageru-1.8.2/futatabi/video_stream.h	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/futatabi/video_stream.h	2019-03-11 23:40:21.000000000 +0100
@@ -114,8 +114,6 @@
 
 		// For original frames only. Made move-only so we know explicitly
 		// we don't copy these ~200 kB files around inadvertently.
-		//
-		// TODO: Consider using vector<uint8_t> instead, so we save one copy.
 		std::unique_ptr<std::string> encoded_jpeg;
 
 		// For everything except original frames.
@@ -160,7 +158,7 @@
 	GLuint last_flow_tex = 0;
 	FrameOnDisk last_frame1, last_frame2;
 
-	std::vector<uint8_t> last_frame;
+	std::string last_frame;
 };
 
 #endif  // !defined(_VIDEO_STREAM_H)
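
The vector<uint8_t> → std::string switch in the two files above is only about avoiding a copy on the hot path: with a movable buffer type, handing the encoded JPEG to last_frame (and to the string-based frame storage) is an ownership transfer instead of a ~200 kB memcpy per frame. A trivial sketch of the difference, using a stub encoder of my own rather than the real encode_jpeg():

#include <string>
#include <utility>

// Hypothetical stand-in for the real encoder; just produces ~200 kB of bytes.
static std::string encode_jpeg_stub()
{
	return std::string(200 * 1024, 'x');
}

int main()
{
	std::string last_frame;
	std::string jpeg = encode_jpeg_stub();

	// 1.8.2 equivalent: last_frame.assign(jpeg.data(), jpeg.data() + jpeg.size());  // copies the buffer
	last_frame = std::move(jpeg);  // 1.8.4: steals the buffer, no byte copy

	return int(last_frame.size() != 200 * 1024);
}
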
diff -Nru nageru-1.8.2/meson.build nageru-1.8.4/meson.build
--- nageru-1.8.2/meson.build	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/meson.build	2019-03-11 23:40:21.000000000 +0100
@@ -1,4 +1,4 @@
-project('nageru', 'cpp', default_options: ['buildtype=debugoptimized'], version: '1.8.2')
+project('nageru', 'cpp', default_options: ['buildtype=debugoptimized'], version: '1.8.4')
 
 cxx = meson.get_compiler('cpp')
 qt5 = import('qt5')
@@ -94,7 +94,7 @@
 		nageru_install_rpath = '$ORIGIN/'
 	endif
 
-	cefdep = cxx.find_library('cef')
+	cefdep = cxx.find_library('cef', dirs: cef_lib_dir)
 	nageru_deps += cefdep
 
 	# CEF wrapper library; not built as part of the CEF binary distribution,
@@ -104,7 +104,7 @@
 		nageru_deps += cefdlldep
 	else
 		cmake = find_program('cmake')
-		cef_compile_script = find_program('scripts/compile_cef_dll_wrapper.sh')
+		cef_compile_script = find_program('nageru/scripts/compile_cef_dll_wrapper.sh')
 
 		cef_dll_target = custom_target('libcef_dll_wrapper',
 			input: join_paths(cef_dir, 'libcef_dll/CMakeLists.txt'),
diff -Nru nageru-1.8.2/nageru/alsa_input.cpp nageru-1.8.4/nageru/alsa_input.cpp
--- nageru-1.8.2/nageru/alsa_input.cpp	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/nageru/alsa_input.cpp	2019-03-11 23:40:21.000000000 +0100
@@ -111,6 +111,9 @@
 	snd_pcm_sw_params_alloca(&sw_params);
 	RETURN_FALSE_ON_ERROR("snd_pcm_sw_params_current()", snd_pcm_sw_params_current(pcm_handle, sw_params));
 	RETURN_FALSE_ON_ERROR("snd_pcm_sw_params_set_start_threshold", snd_pcm_sw_params_set_start_threshold(pcm_handle, sw_params, num_periods * period_size / 2));
+	RETURN_FALSE_ON_ERROR("snd_pcm_sw_params_set_tstamp_mode", snd_pcm_sw_params_set_tstamp_mode(pcm_handle, sw_params, SND_PCM_TSTAMP_ENABLE));
+	RETURN_FALSE_ON_ERROR("snd_pcm_sw_params_set_tstamp_type", snd_pcm_sw_params_set_tstamp_type(pcm_handle, sw_params, SND_PCM_TSTAMP_TYPE_MONOTONIC));
+
 	RETURN_FALSE_ON_ERROR("snd_pcm_sw_params()", snd_pcm_sw_params(pcm_handle, sw_params));
 
 	RETURN_FALSE_ON_ERROR("snd_pcm_nonblock()", snd_pcm_nonblock(pcm_handle, 1));
@@ -171,6 +174,14 @@
 
 void ALSAInput::capture_thread_func()
 {
+	if (!done_init) {
+		char thread_name[16];
+		snprintf(thread_name, sizeof(thread_name), "ALSA_C_%d", internal_dev_index);
+		pthread_setname_np(pthread_self(), thread_name);
+
+		done_init = true;
+	}
+
 	parent_pool->set_card_state(internal_dev_index, ALSAPool::Device::State::STARTING);
 
 	// If the device hasn't been opened already, we need to do so
@@ -221,7 +232,8 @@
 	RETURN_ON_ERROR("snd_pcm_start()", snd_pcm_start(pcm_handle));
 	parent_pool->set_card_state(internal_dev_index, ALSAPool::Device::State::RUNNING);
 
-	uint64_t num_frames_output = 0;
+	snd_pcm_status_t *status;
+	snd_pcm_status_alloca(&status);
 	while (!should_quit.should_quit()) {
 		int ret = snd_pcm_wait(pcm_handle, /*timeout=*/100);
 		if (ret == 0) continue;  // Timeout.
@@ -233,7 +245,14 @@
 		}
 		RETURN_ON_ERROR("snd_pcm_wait()", ret);
 
-		snd_pcm_sframes_t frames = snd_pcm_readi(pcm_handle, buffer.get(), buffer_frames);
+		ret = snd_pcm_status(pcm_handle, status);
+		RETURN_ON_ERROR("snd_pcm_status()", ret);
+
+		snd_pcm_sframes_t avail = snd_pcm_status_get_avail(status);
+		snd_htimestamp_t alsa_ts;
+		snd_pcm_status_get_htstamp(status, &alsa_ts);
+
+		snd_pcm_sframes_t frames = snd_pcm_readi(pcm_handle, buffer.get(), avail);
 		if (frames == -EPIPE) {
 			fprintf(stderr, "[%s] ALSA overrun\n", device.c_str());
 			snd_pcm_prepare(pcm_handle);
@@ -246,21 +265,13 @@
 		}
 		RETURN_ON_ERROR("snd_pcm_readi()", frames);
 
-		const int64_t prev_pts = frames_to_pts(num_frames_output);
-		const int64_t pts = frames_to_pts(num_frames_output + frames);
-		const steady_clock::time_point now = steady_clock::now();
+		// NOTE: This assumes steady_clock::time_point is the same as clock_gettime(CLOCK_MONOTONIC).
+		const steady_clock::time_point ts = steady_clock::time_point(seconds(alsa_ts.tv_sec) + nanoseconds(alsa_ts.tv_nsec));
 		bool success;
 		do {
 			if (should_quit.should_quit()) return CaptureEndReason::REQUESTED_QUIT;
-			success = audio_callback(buffer.get(), frames, audio_format, pts - prev_pts, now);
+			success = audio_callback(buffer.get(), frames, audio_format, ts);
 		} while (!success);
-		num_frames_output += frames;
 	}
 	return CaptureEndReason::REQUESTED_QUIT;
 }
-
-int64_t ALSAInput::frames_to_pts(uint64_t n) const
-{
-	return (n * TIMEBASE) / sample_rate;
-}
-
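
A note on the ALSA hunks above: the synthesized sample-count clock is replaced by the device's hardware timestamp, and the code relies on std::chrono::steady_clock being CLOCK_MONOTONIC (true with glibc/libstdc++, as the in-code NOTE says). A minimal sketch of just that timestamp conversion; the timestamp value is made up so the example needs no capture device:

#include <alsa/asoundlib.h>
#include <chrono>
#include <cstdio>

// Maps an ALSA hardware timestamp (CLOCK_MONOTONIC domain) onto
// std::chrono::steady_clock. Like the patch, this assumes steady_clock is
// backed by CLOCK_MONOTONIC.
static std::chrono::steady_clock::time_point alsa_ts_to_steady(const snd_htimestamp_t &ts)
{
	using namespace std::chrono;
	return steady_clock::time_point(seconds(ts.tv_sec) + nanoseconds(ts.tv_nsec));
}

int main()
{
	// In the real capture loop this would come from snd_pcm_status_get_htstamp()
	// after snd_pcm_status(); here it is just an example value.
	snd_htimestamp_t ts{ 1234, 567890000 };
	auto tp = alsa_ts_to_steady(ts);
	printf("%lld ns since the monotonic epoch\n",
	       (long long)std::chrono::duration_cast<std::chrono::nanoseconds>(tp.time_since_epoch()).count());
}
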
diff -Nru nageru-1.8.2/nageru/alsa_input.h nageru-1.8.4/nageru/alsa_input.h
--- nageru-1.8.2/nageru/alsa_input.h	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/nageru/alsa_input.h	2019-03-11 23:40:21.000000000 +0100
@@ -5,9 +5,7 @@
 // in callbacks.
 //
 // Note: “frame” here generally refers to the ALSA definition of frame,
-// which is a set of samples, exactly one for each channel. The only exception
-// is in frame_length, where it means the TIMEBASE length of the buffer
-// as a whole, since that's what AudioMixer::add_audio() wants.
+// which is a set of samples, exactly one for each channel.
 
 #include <alsa/asoundlib.h>
 #include <stdint.h>
@@ -26,7 +24,7 @@
 
 class ALSAInput {
 public:
-	typedef std::function<bool(const uint8_t *data, unsigned num_samples, bmusb::AudioFormat audio_format, int64_t frame_length, std::chrono::steady_clock::time_point ts)> audio_callback_t;
+	typedef std::function<bool(const uint8_t *data, unsigned num_samples, bmusb::AudioFormat audio_format, std::chrono::steady_clock::time_point ts)> audio_callback_t;
 
 	ALSAInput(const char *device, unsigned sample_rate, unsigned num_channels, audio_callback_t audio_callback, ALSAPool *parent_pool, unsigned internal_dev_index);
 	~ALSAInput();
@@ -50,8 +48,8 @@
 	static bool set_base_params(const char *device_name, snd_pcm_t *pcm_handle, snd_pcm_hw_params_t *hw_params, unsigned *sample_rate);
 
 private:
+	bool done_init = false;
 	void capture_thread_func();
-	int64_t frames_to_pts(uint64_t n) const;
 
 	enum class CaptureEndReason {
 		REQUESTED_QUIT,
diff -Nru nageru-1.8.2/nageru/alsa_pool.cpp nageru-1.8.4/nageru/alsa_pool.cpp
--- nageru-1.8.2/nageru/alsa_pool.cpp	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/nageru/alsa_pool.cpp	2019-03-11 23:40:21.000000000 +0100
@@ -402,7 +402,7 @@
 		inputs[index].reset();
 	} else {
 		// TODO: Put on a background thread instead of locking?
-		auto callback = bind(&AudioMixer::add_audio, global_audio_mixer, DeviceSpec{InputSourceType::ALSA_INPUT, index}, _1, _2, _3, _4, _5);
+		auto callback = bind(&AudioMixer::add_audio, global_audio_mixer, DeviceSpec{InputSourceType::ALSA_INPUT, index}, _1, _2, _3, _4);
 		inputs[index].reset(new ALSAInput(device->address.c_str(), OUTPUT_FREQUENCY, device->num_channels, callback, this, index));
 		inputs[index]->start_capture_thread();
 	}
diff -Nru nageru-1.8.2/nageru/audio_encoder.cpp nageru-1.8.4/nageru/audio_encoder.cpp
--- nageru-1.8.2/nageru/audio_encoder.cpp	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/nageru/audio_encoder.cpp	2019-03-11 23:40:21.000000000 +0100
@@ -115,7 +115,7 @@
 	audio_frame->sample_rate = OUTPUT_FREQUENCY;
 
 	if (av_samples_alloc(audio_frame->data, nullptr, 2, num_samples, ctx->sample_fmt, 0) < 0) {
-		fprintf(stderr, "Could not allocate %ld samples.\n", num_samples);
+		fprintf(stderr, "Could not allocate %zu samples.\n", num_samples);
 		exit(1);
 	}
 
diff -Nru nageru-1.8.2/nageru/audio_mixer.cpp nageru-1.8.4/nageru/audio_mixer.cpp
--- nageru-1.8.2/nageru/audio_mixer.cpp	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/nageru/audio_mixer.cpp	2019-03-11 23:40:21.000000000 +0100
@@ -244,7 +244,7 @@
 	}
 }
 
-bool AudioMixer::add_audio(DeviceSpec device_spec, const uint8_t *data, unsigned num_samples, AudioFormat audio_format, int64_t frame_length, steady_clock::time_point frame_time)
+bool AudioMixer::add_audio(DeviceSpec device_spec, const uint8_t *data, unsigned num_samples, AudioFormat audio_format, steady_clock::time_point frame_time)
 {
 	AudioDevice *device = find_audio_device(device_spec);
 
@@ -294,7 +294,7 @@
 	return true;
 }
 
-bool AudioMixer::add_silence(DeviceSpec device_spec, unsigned samples_per_frame, unsigned num_frames, int64_t frame_length)
+bool AudioMixer::add_silence(DeviceSpec device_spec, unsigned samples_per_frame, unsigned num_frames)
 {
 	AudioDevice *device = find_audio_device(device_spec);
 
diff -Nru nageru-1.8.2/nageru/audio_mixer.h nageru-1.8.4/nageru/audio_mixer.h
--- nageru-1.8.2/nageru/audio_mixer.h	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/nageru/audio_mixer.h	2019-03-11 23:40:21.000000000 +0100
@@ -54,9 +54,9 @@
 	// the lock wasn't successfully taken; if so, you should simply try again.
 	// (This is to avoid a deadlock where a card hangs on the mutex in add_audio()
 	// while we are trying to shut it down from another thread that also holds
-	// the mutex.) frame_length is in TIMEBASE units.
-	bool add_audio(DeviceSpec device_spec, const uint8_t *data, unsigned num_samples, bmusb::AudioFormat audio_format, int64_t frame_length, std::chrono::steady_clock::time_point frame_time);
-	bool add_silence(DeviceSpec device_spec, unsigned samples_per_frame, unsigned num_frames, int64_t frame_length);
+	// the mutex.)
+	bool add_audio(DeviceSpec device_spec, const uint8_t *data, unsigned num_samples, bmusb::AudioFormat audio_format, std::chrono::steady_clock::time_point frame_time);
+	bool add_silence(DeviceSpec device_spec, unsigned samples_per_frame, unsigned num_frames);
 
 	// If a given device is offline for whatever reason and cannot deliver audio
 	// (by means of add_audio() or add_silence()), you can call put it in silence mode,
diff -Nru nageru-1.8.2/nageru/benchmark_audio_mixer.cpp nageru-1.8.4/nageru/benchmark_audio_mixer.cpp
--- nageru-1.8.2/nageru/benchmark_audio_mixer.cpp	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/nageru/benchmark_audio_mixer.cpp	2019-03-11 23:40:21.000000000 +0100
@@ -73,7 +73,7 @@
 		unsigned num_samples = NUM_SAMPLES + (lcgrand() % 9) - 5;
 		bool ok = mixer->add_audio(DeviceSpec{InputSourceType::CAPTURE_CARD, card_index},
 			card_index == 3 ? samples24 : samples16, num_samples, audio_format,
-			NUM_SAMPLES * TIMEBASE / OUTPUT_FREQUENCY, ts);
+			ts);
 		assert(ok);
 	}
 
@@ -162,7 +162,7 @@
 
 	double elapsed = duration<double>(end - start).count();
 	double simulated = double(out_samples) / (OUTPUT_FREQUENCY * 2);
-	printf("%ld samples produced in %.1f ms (%.1f%% CPU, %.1fx realtime).\n",
+	printf("%zu samples produced in %.1f ms (%.1f%% CPU, %.1fx realtime).\n",
 		out_samples, elapsed * 1e3, 100.0 * elapsed / simulated, simulated / elapsed);
 }
 
diff -Nru nageru-1.8.2/nageru/cef_capture.cpp nageru-1.8.4/nageru/cef_capture.cpp
--- nageru-1.8.2/nageru/cef_capture.cpp	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/nageru/cef_capture.cpp	2019-03-11 23:40:21.000000000 +0100
@@ -246,16 +246,15 @@
 	parent->OnPaint(buffer, width, height);
 }
 
-bool NageruCEFClient::GetViewRect(CefRefPtr<CefBrowser> browser, CefRect &rect)
+void NageruCEFClient::GetViewRect(CefRefPtr<CefBrowser> browser, CefRect &rect)
 {
-	return parent->GetViewRect(rect);
+	parent->GetViewRect(rect);
 }
 
-bool CEFCapture::GetViewRect(CefRect &rect)
+void CEFCapture::GetViewRect(CefRect &rect)
 {
 	lock_guard<mutex> lock(resolution_mutex);
 	rect = CefRect(0, 0, width, height);
-	return true;
 }
 
 void NageruCEFClient::OnLoadEnd(CefRefPtr<CefBrowser> browser, CefRefPtr<CefFrame> frame, int httpStatusCode)
diff -Nru nageru-1.8.2/nageru/cef_capture.h nageru-1.8.4/nageru/cef_capture.h
--- nageru-1.8.2/nageru/cef_capture.h	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/nageru/cef_capture.h	2019-03-11 23:40:21.000000000 +0100
@@ -52,7 +52,7 @@
 
 	void OnPaint(CefRefPtr<CefBrowser> browser, PaintElementType type, const RectList &dirtyRects, const void *buffer, int width, int height) override;
 
-	bool GetViewRect(CefRefPtr<CefBrowser> browser, CefRect &rect) override;
+	void GetViewRect(CefRefPtr<CefBrowser> browser, CefRect &rect) override;
 
 	// CefLoadHandler.
 
@@ -89,7 +89,7 @@
 
 	// Callbacks from NageruCEFClient.
 	void OnPaint(const void *buffer, int width, int height);
-	bool GetViewRect(CefRect &rect);
+	void GetViewRect(CefRect &rect);
 	void OnLoadEnd();
 
 	// CaptureInterface.
diff -Nru nageru-1.8.2/nageru/decklink_output.cpp nageru-1.8.4/nageru/decklink_output.cpp
--- nageru-1.8.2/nageru/decklink_output.cpp	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/nageru/decklink_output.cpp	2019-03-11 23:40:21.000000000 +0100
@@ -334,10 +334,10 @@
 	HRESULT result = output->ScheduleAudioSamples(int_samples.get(), samples.size() / 2,
 		pts, TIMEBASE, &frames_written);
 	if (result != S_OK) {
-		fprintf(stderr, "ScheduleAudioSamples(pts=%ld) failed (result=0x%08x)\n", pts, result);
+		fprintf(stderr, "ScheduleAudioSamples(pts=%" PRId64 ") failed (result=0x%08x)\n", pts, result);
 	} else {
 		if (frames_written != samples.size() / 2) {
-			fprintf(stderr, "ScheduleAudioSamples() returned short write (%u/%ld)\n", frames_written, samples.size() / 2);
+			fprintf(stderr, "ScheduleAudioSamples() returned short write (%u/%zu)\n", frames_written, samples.size() / 2);
 		}
 	}
 	metric_decklink_output_scheduled_samples += samples.size() / 2;
@@ -457,17 +457,17 @@
 		++metric_decklink_output_completed_frames_completed;
 		break;
 	case bmdOutputFrameDisplayedLate:
-		fprintf(stderr, "Output frame displayed late (pts=%ld)\n", frame->pts);
+		fprintf(stderr, "Output frame displayed late (pts=%" PRId64 ")\n", frame->pts);
 		fprintf(stderr, "Consider increasing --output-buffer-frames if this persists.\n");
 		++metric_decklink_output_completed_frames_late;
 		break;
 	case bmdOutputFrameDropped:
-		fprintf(stderr, "Output frame was dropped (pts=%ld)\n", frame->pts);
+		fprintf(stderr, "Output frame was dropped (pts=%" PRId64 ")\n", frame->pts);
 		fprintf(stderr, "Consider increasing --output-buffer-frames if this persists.\n");
 		++metric_decklink_output_completed_frames_dropped;
 		break;
 	case bmdOutputFrameFlushed:
-		fprintf(stderr, "Output frame was flushed (pts=%ld)\n", frame->pts);
+		fprintf(stderr, "Output frame was flushed (pts=%" PRId64 ")\n", frame->pts);
 		++metric_decklink_output_completed_frames_flushed;
 		break;
 	default:
diff -Nru nageru-1.8.2/nageru/ffmpeg_capture.cpp nageru-1.8.4/nageru/ffmpeg_capture.cpp
--- nageru-1.8.2/nageru/ffmpeg_capture.cpp	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/nageru/ffmpeg_capture.cpp	2019-03-11 23:40:21.000000000 +0100
@@ -451,6 +451,10 @@
 		if (process_queued_commands(format_ctx.get(), pathname, last_modified, /*rewound=*/nullptr)) {
 			return true;
 		}
+		if (should_interrupt.load()) {
+			// Check as a failsafe, so that we don't need to rely on avio if we don't have to.
+			return false;
+		}
 		UniqueFrame audio_frame = audio_frame_allocator->alloc_frame();
 		AudioFormat audio_format;
 
@@ -463,6 +467,11 @@
 		}
 		if (frame == nullptr) {
 			// EOF. Loop back to the start if we can.
+			if (format_ctx->pb != nullptr && format_ctx->pb->seekable == 0) {
+				// Not seekable (but seemingly, sometimes av_seek_frame() would return 0 anyway,
+				// so don't try).
+				return true;
+			}
 			if (av_seek_frame(format_ctx.get(), /*stream_index=*/-1, /*timestamp=*/0, /*flags=*/0) < 0) {
 				fprintf(stderr, "%s: Rewind failed, not looping.\n", pathname.c_str());
 				return true;
@@ -494,56 +503,67 @@
 			if (last_pts == 0 && pts_origin == 0) {
 				pts_origin = frame->pts;	
 			}
-			next_frame_start = compute_frame_start(frame->pts, pts_origin, video_timebase, start, rate);
-			if (first_frame && last_frame_was_connected) {
-				// If reconnect took more than one second, this is probably a live feed,
-				// and we should reset the resampler. (Or the rate is really, really low,
-				// in which case a reset on the first frame is fine anyway.)
-				if (duration<double>(next_frame_start - last_frame).count() >= 1.0) {
-					last_frame_was_connected = false;
-				}
-			}
-			video_frame->received_timestamp = next_frame_start;
-
-			// The easiest way to get all the rate conversions etc. right is to move the
-			// audio PTS into the video PTS timebase and go from there. (We'll get some
-			// rounding issues, but they should not be a big problem.)
-			int64_t audio_pts_as_video_pts = av_rescale_q(audio_pts, audio_timebase, video_timebase);
-			audio_frame->received_timestamp = compute_frame_start(audio_pts_as_video_pts, pts_origin, video_timebase, start, rate);
-
-			if (audio_frame->len != 0) {
-				// The received timestamps in Nageru are measured after we've just received the frame.
-				// However, pts (especially audio pts) is at the _beginning_ of the frame.
-				// If we have locked audio, the distinction doesn't really matter, as pts is
-				// on a relative scale and a fixed offset is fine. But if we don't, we will have
-				// a different number of samples each time, which will cause huge audio jitter
-				// and throw off the resampler.
-				//
-				// In a sense, we should have compensated by adding the frame and audio lengths
-				// to video_frame->received_timestamp and audio_frame->received_timestamp respectively,
-				// but that would mean extra waiting in sleep_until(). All we need is that they
-				// are correct relative to each other, though (and to the other frames we send),
-				// so just align the end of the audio frame, and we're fine.
-				size_t num_samples = (audio_frame->len * 8) / audio_format.bits_per_sample / audio_format.num_channels;
-				double offset = double(num_samples) / OUTPUT_FREQUENCY -
-					double(video_format.frame_rate_den) / video_format.frame_rate_nom;
-				audio_frame->received_timestamp += duration_cast<steady_clock::duration>(duration<double>(offset));
-			}
-
 			steady_clock::time_point now = steady_clock::now();
-			if (duration<double>(now - next_frame_start).count() >= 0.1) {
-				// If we don't have enough CPU to keep up, or if we have a live stream
-				// where the initial origin was somehow wrong, we could be behind indefinitely.
-				// In particular, this will give the audio resampler problems as it tries
-				// to speed up to reduce the delay, hitting the low end of the buffer every time.
-				fprintf(stderr, "%s: Playback %.0f ms behind, resetting time scale\n",
-					pathname.c_str(),
-					1e3 * duration<double>(now - next_frame_start).count());
-				pts_origin = frame->pts;
-				start = next_frame_start = now;
-				timecode += MAX_FPS * 2 + 1;
+			if (play_as_fast_as_possible) {
+				video_frame->received_timestamp = now;
+				audio_frame->received_timestamp = now;
+				next_frame_start = now;
+			} else {
+				next_frame_start = compute_frame_start(frame->pts, pts_origin, video_timebase, start, rate);
+				if (first_frame && last_frame_was_connected) {
+					// If reconnect took more than one second, this is probably a live feed,
+					// and we should reset the resampler. (Or the rate is really, really low,
+					// in which case a reset on the first frame is fine anyway.)
+					if (duration<double>(next_frame_start - last_frame).count() >= 1.0) {
+						last_frame_was_connected = false;
+					}
+				}
+				video_frame->received_timestamp = next_frame_start;
+
+				// The easiest way to get all the rate conversions etc. right is to move the
+				// audio PTS into the video PTS timebase and go from there. (We'll get some
+				// rounding issues, but they should not be a big problem.)
+				int64_t audio_pts_as_video_pts = av_rescale_q(audio_pts, audio_timebase, video_timebase);
+				audio_frame->received_timestamp = compute_frame_start(audio_pts_as_video_pts, pts_origin, video_timebase, start, rate);
+
+				if (audio_frame->len != 0) {
+					// The received timestamps in Nageru are measured after we've just received the frame.
+					// However, pts (especially audio pts) is at the _beginning_ of the frame.
+					// If we have locked audio, the distinction doesn't really matter, as pts is
+					// on a relative scale and a fixed offset is fine. But if we don't, we will have
+					// a different number of samples each time, which will cause huge audio jitter
+					// and throw off the resampler.
+					//
+					// In a sense, we should have compensated by adding the frame and audio lengths
+					// to video_frame->received_timestamp and audio_frame->received_timestamp respectively,
+					// but that would mean extra waiting in sleep_until(). All we need is that they
+					// are correct relative to each other, though (and to the other frames we send),
+					// so just align the end of the audio frame, and we're fine.
+					size_t num_samples = (audio_frame->len * 8) / audio_format.bits_per_sample / audio_format.num_channels;
+					double offset = double(num_samples) / OUTPUT_FREQUENCY -
+						double(video_format.frame_rate_den) / video_format.frame_rate_nom;
+					audio_frame->received_timestamp += duration_cast<steady_clock::duration>(duration<double>(offset));
+				}
+
+				if (duration<double>(now - next_frame_start).count() >= 0.1) {
+					// If we don't have enough CPU to keep up, or if we have a live stream
+					// where the initial origin was somehow wrong, we could be behind indefinitely.
+					// In particular, this will give the audio resampler problems as it tries
+					// to speed up to reduce the delay, hitting the low end of the buffer every time.
+					fprintf(stderr, "%s: Playback %.0f ms behind, resetting time scale\n",
+						pathname.c_str(),
+						1e3 * duration<double>(now - next_frame_start).count());
+					pts_origin = frame->pts;
+					start = next_frame_start = now;
+					timecode += MAX_FPS * 2 + 1;
+				}
+			}
+			bool finished_wakeup;
+			if (play_as_fast_as_possible) {
+				finished_wakeup = !producer_thread_should_quit.should_quit();
+			} else {
+				finished_wakeup = producer_thread_should_quit.sleep_until(next_frame_start);
 			}
-			bool finished_wakeup = producer_thread_should_quit.sleep_until(next_frame_start);
 			if (finished_wakeup) {
 				if (audio_frame->len > 0) {
 					assert(audio_pts != -1);
@@ -627,6 +647,7 @@
 			start = compute_frame_start(last_pts, pts_origin, video_timebase, start, rate);
 			pts_origin = last_pts;
 			rate = cmd.new_rate;
+			play_as_fast_as_possible = (rate >= 10.0);
 			break;
 		}
 	}
diff -Nru nageru-1.8.2/nageru/ffmpeg_capture.h nageru-1.8.4/nageru/ffmpeg_capture.h
--- nageru-1.8.2/nageru/ffmpeg_capture.h	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/nageru/ffmpeg_capture.h	2019-03-11 23:40:21.000000000 +0100
@@ -254,6 +254,7 @@
 	bool running = false;
 	int card_index = -1;
 	double rate = 1.0;
+	bool play_as_fast_as_possible = false;  // Activated iff rate >= 10.0.
 	std::atomic<bool> should_interrupt{false};
 	bool last_frame_was_connected = true;
 
diff -Nru nageru-1.8.2/nageru/kaeru.cpp nageru-1.8.4/nageru/kaeru.cpp
--- nageru-1.8.2/nageru/kaeru.cpp	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/nageru/kaeru.cpp	2019-03-11 23:40:21.000000000 +0100
@@ -104,17 +104,18 @@
 		size_t num_samples = audio_frame.len / (audio_format.bits_per_sample / 8);
 		vector<float> float_samples;
 		float_samples.resize(num_samples);
+
 		if (audio_format.bits_per_sample == 16) {
 			const int16_t *src = (const int16_t *)audio_frame.data;
 			float *dst = &float_samples[0];
 			for (size_t i = 0; i < num_samples; ++i) {
-				*dst++ = le16toh(*src++) * (1.0f / 32768.0f);
+				*dst++ = int16_t(le16toh(*src++)) * (1.0f / 32768.0f);
 			}
 		} else if (audio_format.bits_per_sample == 32) {
 			const int32_t *src = (const int32_t *)audio_frame.data;
 			float *dst = &float_samples[0];
 			for (size_t i = 0; i < num_samples; ++i) {
-				*dst++ = le32toh(*src++) * (1.0f / 2147483648.0f);
+				*dst++ = int32_t(le32toh(*src++)) * (1.0f / 2147483648.0f);
 			}
 		} else {
 			assert(false);
@@ -209,7 +210,7 @@
 	}
 	video.configure_card();
 	video.start_bm_capture();
-	video.change_rate(2.0);  // Be sure never to really fall behind, but also don't dump huge amounts of stuff onto x264.
+	video.change_rate(10.0);  // Play as fast as possible.
 
 	BasicStats basic_stats(/*verbose=*/false, /*use_opengl=*/false);
 	global_basic_stats = &basic_stats;
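
The int16_t/int32_t casts in the kaeru.cpp hunk above fix a signed/unsigned confusion: le16toh()/le32toh() return unsigned values, so without a cast back to the signed sample type, every negative sample turns into a large positive one before the float scaling. A tiny demonstration of the failure mode, with a sample value chosen purely for illustration:

#include <cstdint>
#include <cstdio>
#include <endian.h>

int main()
{
	const int16_t src = -1;  // a slightly negative 16-bit PCM sample

	// Broken: le16toh() hands back an unsigned value (0xffff == 65535),
	// so the sample scales to almost +2.0 instead of almost 0.
	float wrong = le16toh(src) * (1.0f / 32768.0f);

	// Fixed, as in the hunk: cast back to the signed sample type first.
	float right = int16_t(le16toh(src)) * (1.0f / 32768.0f);

	printf("wrong=%f right=%f\n", wrong, right);  // wrong=1.999969 right=-0.000031
}
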
diff -Nru nageru-1.8.2/nageru/mainwindow.cpp nageru-1.8.4/nageru/mainwindow.cpp
--- nageru-1.8.2/nageru/mainwindow.cpp	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/nageru/mainwindow.cpp	2019-03-11 23:40:21.000000000 +0100
@@ -253,30 +253,16 @@
 	qRegisterMetaType<Mixer::Output>("Mixer::Output");
 
 	// Hook up the prev/next buttons on the audio views.
-	auto prev_page = [this]{
-		if (global_audio_mixer->get_mapping_mode() == AudioMixer::MappingMode::MULTICHANNEL) {
-			ui->audio_views->setCurrentIndex((ui->audio_views->currentIndex() + 2) % 3);
-		} else {
-			ui->audio_views->setCurrentIndex(2 - ui->audio_views->currentIndex());  // Switch between 0 and 2.
-		}
-	};
-	auto next_page = [this]{
-		if (global_audio_mixer->get_mapping_mode() == AudioMixer::MappingMode::MULTICHANNEL) {
-			ui->audio_views->setCurrentIndex((ui->audio_views->currentIndex() + 1) % 3);
-		} else {
-			ui->audio_views->setCurrentIndex(2 - ui->audio_views->currentIndex());  // Switch between 0 and 2.
-		}
-	};
-	connect(ui->compact_prev_page, &QAbstractButton::clicked, prev_page);
-	connect(ui->compact_next_page, &QAbstractButton::clicked, next_page);
-	connect(ui->full_prev_page, &QAbstractButton::clicked, prev_page);
-	connect(ui->full_next_page, &QAbstractButton::clicked, next_page);
-	connect(ui->video_grid_prev_page, &QAbstractButton::clicked, prev_page);
-	connect(ui->video_grid_next_page, &QAbstractButton::clicked, next_page);
+	connect(ui->compact_prev_page, &QAbstractButton::clicked, this, &MainWindow::prev_page);
+	connect(ui->compact_next_page, &QAbstractButton::clicked, this, &MainWindow::next_page);
+	connect(ui->full_prev_page, &QAbstractButton::clicked, this, &MainWindow::prev_page);
+	connect(ui->full_next_page, &QAbstractButton::clicked, this, &MainWindow::next_page);
+	connect(ui->video_grid_prev_page, &QAbstractButton::clicked, this, &MainWindow::prev_page);
+	connect(ui->video_grid_next_page, &QAbstractButton::clicked, this, &MainWindow::next_page);
 
 	// And bind the same to PgUp/PgDown.
-	connect(new QShortcut(QKeySequence::MoveToNextPage, this), &QShortcut::activated, next_page);
-	connect(new QShortcut(QKeySequence::MoveToPreviousPage, this), &QShortcut::activated, prev_page);
+	connect(new QShortcut(QKeySequence::MoveToNextPage, this), &QShortcut::activated, this, &MainWindow::next_page);
+	connect(new QShortcut(QKeySequence::MoveToPreviousPage, this), &QShortcut::activated, this, &MainWindow::prev_page);
 
 	// When the audio view changes, move the previews.
 	connect(ui->audio_views, &QStackedWidget::currentChanged, bind(&MainWindow::audio_view_changed, this, _1));
@@ -295,7 +281,7 @@
 		if (!load_midi_mapping_from_file(global_flags.midi_mapping_filename, &midi_mapping)) {
 			fprintf(stderr, "Couldn't load MIDI mapping '%s'; exiting.\n",
 				global_flags.midi_mapping_filename.c_str());
-			exit(1);
+			::exit(1);
 		}
 		midi_mapper.set_midi_mapping(midi_mapping);
 	}
@@ -306,6 +292,24 @@
 	}
 }
 
+void MainWindow::prev_page()
+{
+	if (global_audio_mixer->get_mapping_mode() == AudioMixer::MappingMode::MULTICHANNEL) {
+		ui->audio_views->setCurrentIndex((ui->audio_views->currentIndex() + 2) % 3);
+	} else {
+		ui->audio_views->setCurrentIndex(2 - ui->audio_views->currentIndex());  // Switch between 0 and 2.
+	}
+}
+
+void MainWindow::next_page()
+{
+	if (global_audio_mixer->get_mapping_mode() == AudioMixer::MappingMode::MULTICHANNEL) {
+		ui->audio_views->setCurrentIndex((ui->audio_views->currentIndex() + 1) % 3);
+	} else {
+		ui->audio_views->setCurrentIndex(2 - ui->audio_views->currentIndex());  // Switch between 0 and 2.
+	}
+}
+
 void MainWindow::resizeEvent(QResizeEvent* event)
 {
 	QMainWindow::resizeEvent(event);
@@ -1244,6 +1248,42 @@
 	}
 }
 
+void MainWindow::switch_video_channel(int channel_number)
+{
+	global_mixer->channel_clicked(channel_number);
+}
+
+void MainWindow::apply_transition(int transition_number)
+{
+	global_mixer->transition_clicked(transition_number);
+}
+
+void MainWindow::prev_audio_view()
+{
+	post_to_main_thread([this]{
+		prev_page();
+	});
+}
+
+void MainWindow::next_audio_view()
+{
+	post_to_main_thread([this]{
+		next_page();
+	});
+}
+
+void MainWindow::begin_new_segment()
+{
+	global_mixer->schedule_cut();
+}
+
+void MainWindow::exit()
+{
+	post_to_main_thread([this]{
+		close();
+	});
+}
+
 void MainWindow::highlight_locut(bool highlight)
 {
 	post_to_main_thread([this, highlight]{
diff -Nru nageru-1.8.2/nageru/mainwindow.h nageru-1.8.4/nageru/mainwindow.h
--- nageru-1.8.2/nageru/mainwindow.h	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/nageru/mainwindow.h	2019-03-11 23:40:21.000000000 +0100
@@ -96,6 +96,13 @@
 	void toggle_limiter() override;
 	void toggle_auto_makeup_gain() override;
 
+	void switch_video_channel(int channel_number) override;
+	void apply_transition(int transition_number) override;
+	void prev_audio_view() override;
+	void next_audio_view() override;
+	void begin_new_segment() override;
+	void exit() override;
+
 	void clear_all_highlights() override;
 
 	void highlight_locut(bool highlight) override;
@@ -133,6 +140,8 @@
 	void update_stereo_label(unsigned bus_index, int stereo_width_percent);
 	void update_eq_label(unsigned bus_index, EQBand band, float gain_db);
 	void setup_theme_menu();
+	void prev_page();
+	void next_page();
 
 	// Called from DiskSpaceEstimator.
 	void report_disk_space(off_t free_bytes, double estimated_seconds_left);
diff -Nru nageru-1.8.2/nageru/midi_mapper.cpp nageru-1.8.4/nageru/midi_mapper.cpp
--- nageru-1.8.2/nageru/midi_mapper.cpp	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/nageru/midi_mapper.cpp	2019-03-11 23:40:21.000000000 +0100
@@ -179,6 +179,18 @@
 		bind(&ControllerReceiver::toggle_limiter, receiver));
 	match_button(note, MIDIMappingBusProto::kToggleAutoMakeupGainFieldNumber, MIDIMappingProto::kToggleAutoMakeupGainBankFieldNumber,
 		bind(&ControllerReceiver::toggle_auto_makeup_gain, receiver));
+	match_button(note, MIDIMappingBusProto::kSwitchVideoChannelFieldNumber, MIDIMappingProto::kSwitchVideoChannelBankFieldNumber,
+		bind(&ControllerReceiver::switch_video_channel, receiver, _1));
+	match_button(note, MIDIMappingBusProto::kApplyTransitionFieldNumber, MIDIMappingProto::kApplyTransitionBankFieldNumber,
+		bind(&ControllerReceiver::apply_transition, receiver, _1));
+	match_button(note, MIDIMappingBusProto::kPrevAudioViewFieldNumber, MIDIMappingProto::kPrevAudioViewBankFieldNumber,
+		bind(&ControllerReceiver::prev_audio_view, receiver));
+	match_button(note, MIDIMappingBusProto::kNextAudioViewFieldNumber, MIDIMappingProto::kNextAudioViewBankFieldNumber,
+		bind(&ControllerReceiver::next_audio_view, receiver));
+	match_button(note, MIDIMappingBusProto::kBeginNewVideoSegmentFieldNumber, MIDIMappingProto::kBeginNewVideoSegmentBankFieldNumber,
+		bind(&ControllerReceiver::begin_new_segment, receiver));
+	match_button(note, MIDIMappingBusProto::kExitFieldNumber, MIDIMappingProto::kExitBankFieldNumber,
+		bind(&ControllerReceiver::exit, receiver));
 }
 
 void MIDIMapper::update_num_subscribers(unsigned num_subscribers)
diff -Nru nageru-1.8.2/nageru/midi_mapper.h nageru-1.8.4/nageru/midi_mapper.h
--- nageru-1.8.2/nageru/midi_mapper.h	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/nageru/midi_mapper.h	2019-03-11 23:40:21.000000000 +0100
@@ -47,6 +47,14 @@
 	virtual void toggle_limiter() = 0;
 	virtual void toggle_auto_makeup_gain() = 0;
 
+	// Non-audio events.
+	virtual void switch_video_channel(int channel_number) = 0;
+	virtual void apply_transition(int transition_number) = 0;
+	virtual void prev_audio_view() = 0;
+	virtual void next_audio_view() = 0;
+	virtual void begin_new_segment() = 0;
+	virtual void exit() = 0;
+
 	// Signals to highlight controls to mark them to the user
 	// as MIDI-controllable (or not).
 	virtual void clear_all_highlights() = 0;
diff -Nru nageru-1.8.2/nageru/midi_mapping_dialog.cpp nageru-1.8.4/nageru/midi_mapping_dialog.cpp
--- nageru-1.8.2/nageru/midi_mapping_dialog.cpp	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/nageru/midi_mapping_dialog.cpp	2019-03-11 23:40:21.000000000 +0100
@@ -88,6 +88,18 @@
 	{ "Auto makeup gain is on",   MIDIMappingBusProto::kAutoMakeupGainIsOnFieldNumber, 0 },
 };
 
+vector<MIDIMappingDialog::Control> global_video = {
+	{ "Switch video channel",     MIDIMappingBusProto::kSwitchVideoChannelFieldNumber, MIDIMappingProto::kSwitchVideoChannelBankFieldNumber },
+	{ "Apply transition",         MIDIMappingBusProto::kApplyTransitionFieldNumber, MIDIMappingProto::kApplyTransitionBankFieldNumber },
+};
+
+vector<MIDIMappingDialog::Control> main_ui = {
+	{ "Previous audio view",       MIDIMappingBusProto::kPrevAudioViewFieldNumber, MIDIMappingProto::kPrevAudioViewBankFieldNumber },
+	{ "Next audio view",           MIDIMappingBusProto::kNextAudioViewFieldNumber, MIDIMappingProto::kNextAudioViewBankFieldNumber },
+	{ "Begin new video segment",   MIDIMappingBusProto::kBeginNewVideoSegmentFieldNumber, MIDIMappingProto::kBeginNewVideoSegmentBankFieldNumber },
+	{ "Exit Nageru",               MIDIMappingBusProto::kExitFieldNumber, MIDIMappingProto::kExitBankFieldNumber },
+};
+
 namespace {
 
 int get_bank(const MIDIMappingProto &mapping_proto, int bank_field_number, int default_value)
@@ -162,8 +174,10 @@
 	add_controls("Per-bus controllers", ControlType::CONTROLLER, SpinnerGroup::PER_BUS_CONTROLLERS, mapping_proto, per_bus_controllers);
 	add_controls("Per-bus buttons", ControlType::BUTTON, SpinnerGroup::PER_BUS_BUTTONS, mapping_proto, per_bus_buttons);
 	add_controls("Per-bus lights", ControlType::LIGHT, SpinnerGroup::PER_BUS_LIGHTS, mapping_proto, per_bus_lights);
+	add_controls("Video mixing", ControlType::BUTTON, SpinnerGroup::GLOBAL_BUTTONS, mapping_proto, global_video);
 	add_controls("Global controllers", ControlType::CONTROLLER, SpinnerGroup::GLOBAL_CONTROLLERS, mapping_proto, global_controllers);
 	add_controls("Global buttons", ControlType::BUTTON, SpinnerGroup::GLOBAL_BUTTONS, mapping_proto, global_buttons);
+	add_controls("Main UI", ControlType::BUTTON, SpinnerGroup::GLOBAL_BUTTONS, mapping_proto, main_ui);
 	add_controls("Global lights", ControlType::LIGHT, SpinnerGroup::GLOBAL_LIGHTS, mapping_proto, global_lights);
 	fill_controls_from_mapping(mapping_proto);
 
diff -Nru nageru-1.8.2/nageru/midi_mapping_dialog.h nageru-1.8.4/nageru/midi_mapping_dialog.h
--- nageru-1.8.2/nageru/midi_mapping_dialog.h	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/nageru/midi_mapping_dialog.h	2019-03-11 23:40:21.000000000 +0100
@@ -85,6 +85,13 @@
 	void highlight_toggle_limiter(bool highlight) override {}
 	void highlight_toggle_auto_makeup_gain(bool highlight) override {}
 
+	void switch_video_channel(int channel_number) override {}
+	void apply_transition(int transition_number) override {}
+	void prev_audio_view() override {}
+	void next_audio_view() override {}
+	void begin_new_segment() override {}
+	void exit() override {}
+
 	// Raw events; used for the editor dialog only.
 	void controller_changed(unsigned controller) override;
 	void note_on(unsigned note) override;
diff -Nru nageru-1.8.2/nageru/mixer.cpp nageru-1.8.4/nageru/mixer.cpp
--- nageru-1.8.2/nageru/mixer.cpp	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/nageru/mixer.cpp	2019-03-11 23:40:21.000000000 +0100
@@ -779,12 +779,12 @@
 
 		bool success;
 		do {
-			success = audio_mixer->add_silence(device, silence_samples, dropped_frames, frame_length);
+			success = audio_mixer->add_silence(device, silence_samples, dropped_frames);
 		} while (!success);
 	}
 
 	if (num_samples > 0) {
-		audio_mixer->add_audio(device, audio_frame.data + audio_offset, num_samples, audio_format, frame_length, audio_frame.received_timestamp);
+		audio_mixer->add_audio(device, audio_frame.data + audio_offset, num_samples, audio_format, audio_frame.received_timestamp);
 	}
 
 	// Done with the audio, so release it.
@@ -795,7 +795,7 @@
 	card->last_timecode = timecode;
 
 	PBOFrameAllocator::Userdata *userdata = (PBOFrameAllocator::Userdata *)video_frame.userdata;
-	if (card->type == CardType::FFMPEG_INPUT) {
+	if (card->type == CardType::FFMPEG_INPUT && userdata != nullptr) {
 		FFmpegCapture *ffmpeg_capture = static_cast<FFmpegCapture *>(card->capture.get());
 		userdata->has_last_subtitle = ffmpeg_capture->get_has_last_subtitle();
 		userdata->last_subtitle = ffmpeg_capture->get_last_subtitle();
@@ -824,7 +824,7 @@
 	if (video_frame.len - video_offset == 0 ||
 	    video_frame.len - video_offset != expected_length) {
 		if (video_frame.len != 0) {
-			printf("%s: Dropping video frame with wrong length (%ld; expected %ld)\n",
+			printf("%s: Dropping video frame with wrong length (%zu; expected %zu)\n",
 				spec_to_string(device).c_str(), video_frame.len - video_offset, expected_length);
 		}
 		if (video_frame.owner) {
@@ -1082,7 +1082,7 @@
 			}
 
 			// Only bother doing MJPEG encoding if there are any connected clients
-			// that want the stream.
+			// that want the stream. FIXME: We should also stop memcpy-ing if there are none!
 			if (httpd.get_num_connected_multicam_clients() > 0) {
 				auto stream_it = global_flags.card_to_mjpeg_stream_export.find(card_index);
 				if (stream_it != global_flags.card_to_mjpeg_stream_export.end()) {
diff -Nru nageru-1.8.2/nageru/mjpeg_encoder.cpp nageru-1.8.4/nageru/mjpeg_encoder.cpp
--- nageru-1.8.2/nageru/mjpeg_encoder.cpp	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/nageru/mjpeg_encoder.cpp	2019-03-11 23:40:21.000000000 +0100
@@ -16,6 +16,7 @@
 #include "flags.h"
 #include "shared/httpd.h"
 #include "shared/memcpy_interleaved.h"
+#include "shared/metrics.h"
 #include "pbo_frame_allocator.h"
 #include "shared/timebase.h"
 #include "va_display_with_cleanup.h"
@@ -177,12 +178,26 @@
 		va_receiver_thread = thread(&MJPEGEncoder::va_receiver_thread_func, this);
 	}
 
+	global_metrics.add("mjpeg_frames", {{ "status", "dropped" }, { "reason", "zero_size" }}, &metric_mjpeg_frames_zero_size_dropped);
+	global_metrics.add("mjpeg_frames", {{ "status", "dropped" }, { "reason", "interlaced" }}, &metric_mjpeg_frames_interlaced_dropped);
+	global_metrics.add("mjpeg_frames", {{ "status", "dropped" }, { "reason", "unsupported_pixel_format" }}, &metric_mjpeg_frames_unsupported_pixel_format_dropped);
+	global_metrics.add("mjpeg_frames", {{ "status", "dropped" }, { "reason", "oversized" }}, &metric_mjpeg_frames_oversized_dropped);
+	global_metrics.add("mjpeg_frames", {{ "status", "dropped" }, { "reason", "overrun" }}, &metric_mjpeg_overrun_dropped);
+	global_metrics.add("mjpeg_frames", {{ "status", "submitted" }}, &metric_mjpeg_overrun_submitted);
+
 	running = true;
 }
 
 MJPEGEncoder::~MJPEGEncoder()
 {
 	av_free(avctx->pb->buffer);
+
+	global_metrics.remove("mjpeg_frames", {{ "status", "dropped" }, { "reason", "zero_size" }});
+	global_metrics.remove("mjpeg_frames", {{ "status", "dropped" }, { "reason", "interlaced" }});
+	global_metrics.remove("mjpeg_frames", {{ "status", "dropped" }, { "reason", "unsupported_pixel_format" }});
+	global_metrics.remove("mjpeg_frames", {{ "status", "dropped" }, { "reason", "oversized" }});
+	global_metrics.remove("mjpeg_frames", {{ "status", "dropped" }, { "reason", "overrun" }});
+	global_metrics.remove("mjpeg_frames", {{ "status", "submitted" }});
 }
 
 void MJPEGEncoder::stop()
@@ -193,6 +208,7 @@
 	running = false;
 	should_quit = true;
 	any_frames_to_be_encoded.notify_all();
+	any_frames_encoding.notify_all();
 	encoder_thread.join();
 	if (va_dpy != nullptr) {
 		va_receiver_thread.join();
@@ -247,27 +263,33 @@
 {
 	PBOFrameAllocator::Userdata *userdata = (PBOFrameAllocator::Userdata *)frame->userdata;
 	if (video_format.width == 0 || video_format.height == 0) {
+		++metric_mjpeg_frames_zero_size_dropped;
 		return;
 	}
 	if (video_format.interlaced) {
 		fprintf(stderr, "Card %u: Ignoring JPEG encoding for interlaced frame\n", card_index);
+		++metric_mjpeg_frames_interlaced_dropped;
 		return;
 	}
 	if (userdata->pixel_format != PixelFormat_8BitYCbCr ||
 	    !frame->interleaved) {
 		fprintf(stderr, "Card %u: Ignoring JPEG encoding for unsupported pixel format\n", card_index);
+		++metric_mjpeg_frames_unsupported_pixel_format_dropped;
 		return;
 	}
 	if (video_format.width > 4096 || video_format.height > 4096) {
 		fprintf(stderr, "Card %u: Ignoring JPEG encoding for oversized frame\n", card_index);
+		++metric_mjpeg_frames_oversized_dropped;
 		return;
 	}
 
 	lock_guard<mutex> lock(mu);
-	if (frames_to_be_encoded.size() + frames_encoding.size() > 10) {
+	if (frames_to_be_encoded.size() + frames_encoding.size() > 50) {
 		fprintf(stderr, "WARNING: MJPEG encoding doesn't keep up, discarding frame.\n");
+		++metric_mjpeg_overrun_dropped;
 		return;
 	}
+	++metric_mjpeg_overrun_submitted;
 	frames_to_be_encoded.push(QueuedFrame{ pts, card_index, frame, video_format, y_offset, cbcr_offset });
 	any_frames_to_be_encoded.notify_all();
 }
@@ -315,7 +337,8 @@
 	pkt.size = jpeg.size();
 	pkt.stream_index = card_index;
 	pkt.flags = AV_PKT_FLAG_KEY;
-	pkt.pts = pkt.dts = pts;
+	AVRational time_base = avctx->streams[pkt.stream_index]->time_base;
+	pkt.pts = pkt.dts = av_rescale_q(pts, AVRational{ 1, TIMEBASE }, time_base);
 
 	if (av_write_frame(avctx.get(), &pkt) < 0) {
 		fprintf(stderr, "av_write_frame() failed\n");
diff -Nru nageru-1.8.2/nageru/mjpeg_encoder.h nageru-1.8.4/nageru/mjpeg_encoder.h
--- nageru-1.8.2/nageru/mjpeg_encoder.h	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/nageru/mjpeg_encoder.h	2019-03-11 23:40:21.000000000 +0100
@@ -146,6 +146,13 @@
 	static std::unique_ptr<VADisplayWithCleanup> try_open_va(const std::string &va_display, std::string *error, VAConfigID *config_id);
 
 	uint8_t *tmp_y, *tmp_cbcr, *tmp_cb, *tmp_cr;  // Private to the encoder thread. Used by the libjpeg backend only.
+
+	std::atomic<int64_t> metric_mjpeg_frames_zero_size_dropped{0};
+	std::atomic<int64_t> metric_mjpeg_frames_interlaced_dropped{0};
+	std::atomic<int64_t> metric_mjpeg_frames_unsupported_pixel_format_dropped{0};
+	std::atomic<int64_t> metric_mjpeg_frames_oversized_dropped{0};
+	std::atomic<int64_t> metric_mjpeg_overrun_dropped{0};
+	std::atomic<int64_t> metric_mjpeg_overrun_submitted{0};
 };
 
 #endif  // !defined(_MJPEG_ENCODER_H)
diff -Nru nageru-1.8.2/nageru/nageru_midi_mapping.proto nageru-1.8.4/nageru/nageru_midi_mapping.proto
--- nageru-1.8.2/nageru/nageru_midi_mapping.proto	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/nageru/nageru_midi_mapping.proto	2019-03-11 23:40:21.000000000 +0100
@@ -40,6 +40,16 @@
 	optional MIDIButtonProto toggle_limiter = 20;
 	optional MIDIButtonProto toggle_auto_makeup_gain = 21;
 
+	// Video mixing.
+	optional MIDIButtonProto switch_video_channel = 38;
+	optional MIDIButtonProto apply_transition = 39;
+
+	// Main UI. Really global, but see the comment on lo-cut etc. above.
+	optional MIDIButtonProto prev_audio_view = 40;
+	optional MIDIButtonProto next_audio_view = 41;
+	optional MIDIButtonProto begin_new_video_segment = 42;
+	optional MIDIButtonProto exit = 43;
+
 	// These are also global (they belong to the master bus), and unlike
 	// the bank change commands, one would usually have only one of each,
 	// but there's no reason to limit them to one each, and the editor UI
@@ -94,6 +104,10 @@
 	optional int32 toggle_compressor_bank = 11;
 	optional int32 clear_peak_bank = 12;
 
+	// Bus (but non-audio) buttons.
+	optional int32 switch_video_channel_bank = 24;
+	optional int32 apply_transition_bank = 25;
+
 	// Global controller banks.
 	optional int32 locut_bank = 13;
 	optional int32 limiter_threshold_bank = 14;
@@ -103,5 +117,11 @@
 	optional int32 toggle_limiter_bank = 16;
 	optional int32 toggle_auto_makeup_gain_bank = 17;
 
+	// Global non-audio buttons.
+	optional int32 prev_audio_view_bank = 20;
+	optional int32 next_audio_view_bank = 21;
+	optional int32 begin_new_video_segment_bank = 22;
+	optional int32 exit_bank = 23;
+
 	repeated MIDIMappingBusProto bus_mapping = 18;
 }
diff -Nru nageru-1.8.2/nageru/resampling_queue.cpp nageru-1.8.4/nageru/resampling_queue.cpp
--- nageru-1.8.2/nageru/resampling_queue.cpp	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/nageru/resampling_queue.cpp	2019-03-11 23:40:21.000000000 +0100
@@ -115,7 +115,7 @@
 			// so that we don't need a long period to stabilize at the beginning.
 			if (err < 0.0) {
 				int delay_samples_to_add = lrintf(-err);
-				for (ssize_t i = 0; i < delay_samples_to_add * num_channels; ++i) {
+				for (ssize_t i = 0; i < delay_samples_to_add * int(num_channels); ++i) {
 					buffer.push_front(0.0f);
 				}
 				total_consumed_samples -= delay_samples_to_add;  // Equivalent to increasing input_samples_received on a0 and a1.
@@ -143,7 +143,7 @@
 		// (we start ResamplingQueues also when we e.g. switch sound sources),
 		// but in general, a little bit of increased timing jitter is acceptable
 		// right after a setup change like this.
-		double loop_bandwidth_hz = (total_consumed_samples < 4 * freq_in) ? 0.2 : 0.02;
+		double loop_bandwidth_hz = (total_consumed_samples < 4 * int(freq_in)) ? 0.2 : 0.02;
 
 		// Set filters. The first filter much wider than the first one (20x as wide).
 		double w = (2.0 * M_PI) * loop_bandwidth_hz * num_samples / freq_out;
diff -Nru nageru-1.8.2/nageru/x264_encoder.cpp nageru-1.8.4/nageru/x264_encoder.cpp
--- nageru-1.8.2/nageru/x264_encoder.cpp	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/nageru/x264_encoder.cpp	2019-03-11 23:40:21.000000000 +0100
@@ -112,7 +112,7 @@
 	{
 		lock_guard<mutex> lock(mu);
 		if (free_frames.empty()) {
-			fprintf(stderr, "WARNING: x264 queue full, dropping frame with pts %ld\n", pts);
+			fprintf(stderr, "WARNING: x264 queue full, dropping frame with pts %" PRId64 "\n", pts);
 			++metric_x264_dropped_frames;
 			return;
 		}
diff -Nru nageru-1.8.2/NEWS nageru-1.8.4/NEWS
--- nageru-1.8.2/NEWS	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/NEWS	2019-03-11 23:40:21.000000000 +0100
@@ -1,3 +1,25 @@
+Nageru and Futatabi 1.8.4, March 11th, 2019
+
+  - Various bugfixes, in particular for 32-bit platforms.
+
+
+Nageru and Futatabi 1.8.3, March 10th, 2019
+
+  - Allow controlling video mixing from MIDI events. Adapted from a patch
+    by Yann Dubreuil, from the BreizhCamp repository.
+
+  - Use ALSA hardware timestamps for input; gives more stable delay.
+    Patch by Yann Dubreuil, from the BreizhCamp repository.
+
+  - For FFmpeg inputs, add an option for playing as fast as possible
+    (set rate >= 10.0).
+
+  - In Futatabi, support queueing and playing clips with no cue-out point.
+    This opens up new and even faster UI workflows.
+
+  - Many bugfixes.
+
+
 Nageru and Futatabi 1.8.2, January 19th, 2019
 
   - Futatabi now supports MIDI controllers like Nageru, including an editor
diff -Nru nageru-1.8.2/README nageru-1.8.4/README
--- nageru-1.8.2/README	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/README	2019-03-11 23:40:21.000000000 +0100
@@ -1,8 +1,8 @@
 Nageru is a live video mixer, based around the standard M/E workflow.
-Futatabi is a multicamera slow motion video server (currently undocumented).
+Futatabi is a multicamera slow motion video server.
 
 
-Features:
+Nageru features:
 
  - High performance on modest hardware (720p60 with two input streams
    on my Thinkpad X240[1]); almost all pixel processing is done on the GPU.
@@ -72,8 +72,6 @@
 
  - LuaJIT, for driving the theme engine. You will need at least version 2.1.
 
- - SQLite, for storing Futatabi state.
-
  - libjpeg, for encoding MJPEG streams when VA-API JPEG support is not
    available.
 
@@ -95,6 +93,14 @@
    on the meson command line (substituting X with the real version as required).
 
 
+Futatabi also needs:
+
+ - A fast GPU with OpenGL 4.5 support (GTX 1080 or similar recommended for
+   best quality at HD resolutions, although 950 should work).
+
+ - SQLite, for storing state.
+
+
 If on Debian buster or something similar, you can install everything you need
 with:
 
@@ -118,11 +124,13 @@
 It is taken to be by Steinar H. Gunderson <sesse@google.com> (ie., my ex-work
 email), and under the same license as zita-resampler itself.
 
-Nageru uses Meson to build. For a default build, type
+Nageru and Futatabi use Meson to build. For a default build (building both),
+type
 
   meson obj && cd obj && ninja
 
-To start it, just hook up your equipment, and then type “./nageru”.
+To start Nageru, just hook up your equipment, and then type “./nageru”.
+For Futatabi documentation, please see https://nageru.sesse.net/doc/.
 
 It is strongly recommended to have the rights to run at real-time priority;
 it will make the USB3 threads do so, which will make them a lot more stable.
@@ -163,6 +171,9 @@
 to throw or cast. (I also later learned that it could mean to face defeat or
 give up, but that's not the intended meaning.)
 
+The name “Futatabi” comes from the Japanese adverb 再び (futatabi), which means
+“again” or “for the second time”.
+
 
 Nageru's home page is at https://nageru.sesse.net/, where you can also find
 contact information, full documentation and link to the latest version.
diff -Nru nageru-1.8.2/shared/memcpy_interleaved.cpp nageru-1.8.4/shared/memcpy_interleaved.cpp
--- nageru-1.8.2/shared/memcpy_interleaved.cpp	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/shared/memcpy_interleaved.cpp	2019-03-11 23:40:21.000000000 +0100
@@ -1,7 +1,11 @@
+#if (defined(__i386__) || defined(__x86_64__)) && defined(__GNUC__)
+#define HAS_MULTIVERSIONING 1
+#endif
+
 #include <algorithm>
 #include <assert.h>
 #include <cstdint>
-#if __SSE2__
+#if HAS_MULTIVERSIONING
 #include <immintrin.h>
 #endif
 
@@ -20,42 +24,58 @@
 	}
 }
 
-#ifdef __SSE2__
+#if HAS_MULTIVERSIONING
 
-// Returns the number of bytes consumed.
-size_t memcpy_interleaved_fastpath(uint8_t *dest1, uint8_t *dest2, const uint8_t *src, size_t n)
+__attribute__((target("default")))
+size_t memcpy_interleaved_fastpath_core(uint8_t *dest1, uint8_t *dest2, const uint8_t *src, const uint8_t *limit);
+
+__attribute__((target("sse2")))
+size_t memcpy_interleaved_fastpath_core(uint8_t *dest1, uint8_t *dest2, const uint8_t *src, const uint8_t *limit);
+
+__attribute__((target("avx2")))
+size_t memcpy_interleaved_fastpath_core(uint8_t *dest1, uint8_t *dest2, const uint8_t *src, const uint8_t *limit);
+
+__attribute__((target("default")))
+size_t memcpy_interleaved_fastpath_core(uint8_t *dest1, uint8_t *dest2, const uint8_t *src, const uint8_t *limit)
 {
-	const uint8_t *limit = src + n;
-	size_t consumed = 0;
+	// No fast path possible unless we have SSE2 or higher.
+	return 0;
+}
 
-	// Align end to 32 bytes.
-	limit = (const uint8_t *)(intptr_t(limit) & ~31);
+__attribute__((target("sse2")))
+size_t memcpy_interleaved_fastpath_core(uint8_t *dest1, uint8_t *dest2, const uint8_t *src, const uint8_t *limit)
+{
+	size_t consumed = 0;
+	const __m128i * __restrict in = (const __m128i *)src;
+	__m128i * __restrict out1 = (__m128i *)dest1;
+	__m128i * __restrict out2 = (__m128i *)dest2;
 
-	if (src >= limit) {
-		return 0;
-	}
+	__m128i mask_lower_byte = _mm_set1_epi16(0x00ff);
+	while (in < (const __m128i *)limit) {
+		__m128i data1 = _mm_load_si128(in);
+		__m128i data2 = _mm_load_si128(in + 1);
+		__m128i data1_lo = _mm_and_si128(data1, mask_lower_byte);
+		__m128i data2_lo = _mm_and_si128(data2, mask_lower_byte);
+		__m128i data1_hi = _mm_srli_epi16(data1, 8);
+		__m128i data2_hi = _mm_srli_epi16(data2, 8);
+		__m128i lo = _mm_packus_epi16(data1_lo, data2_lo);
+		_mm_storeu_si128(out1, lo);
+		__m128i hi = _mm_packus_epi16(data1_hi, data2_hi);
+		_mm_storeu_si128(out2, hi);
 
-	// Process [0,31] bytes, such that start gets aligned to 32 bytes.
-	const uint8_t *aligned_src = (const uint8_t *)(intptr_t(src + 31) & ~31);
-	if (aligned_src != src) {
-		size_t n2 = aligned_src - src;
-		memcpy_interleaved_slow(dest1, dest2, src, n2);
-		dest1 += n2 / 2;
-		dest2 += n2 / 2;
-		if (n2 % 2) {
-			swap(dest1, dest2);
-		}
-		src = aligned_src;
-		consumed += n2;
+		in += 2;
+		++out1;
+		++out2;
+		consumed += 32;
 	}
 
-	// Make the length a multiple of 64.
-	if (((limit - src) % 64) != 0) {
-		limit -= 32;
-	}
-	assert(((limit - src) % 64) == 0);
+	return consumed;
+}
 
-#if __AVX2__
+__attribute__((target("avx2")))
+size_t memcpy_interleaved_fastpath_core(uint8_t *dest1, uint8_t *dest2, const uint8_t *src, const uint8_t *limit)
+{
+	size_t consumed = 0;
 	const __m256i *__restrict in = (const __m256i *)src;
 	__m256i *__restrict out1 = (__m256i *)dest1;
 	__m256i *__restrict out2 = (__m256i *)dest2;
@@ -85,39 +105,51 @@
 		++out2;
 		consumed += 64;
 	}
-#else
-	const __m128i * __restrict in = (const __m128i *)src;
-	__m128i * __restrict out1 = (__m128i *)dest1;
-	__m128i * __restrict out2 = (__m128i *)dest2;
 
-	__m128i mask_lower_byte = _mm_set1_epi16(0x00ff);
-	while (in < (const __m128i *)limit) {
-		__m128i data1 = _mm_load_si128(in);
-		__m128i data2 = _mm_load_si128(in + 1);
-		__m128i data1_lo = _mm_and_si128(data1, mask_lower_byte);
-		__m128i data2_lo = _mm_and_si128(data2, mask_lower_byte);
-		__m128i data1_hi = _mm_srli_epi16(data1, 8);
-		__m128i data2_hi = _mm_srli_epi16(data2, 8);
-		__m128i lo = _mm_packus_epi16(data1_lo, data2_lo);
-		_mm_storeu_si128(out1, lo);
-		__m128i hi = _mm_packus_epi16(data1_hi, data2_hi);
-		_mm_storeu_si128(out2, hi);
+	return consumed;
+}
 
-		in += 2;
-		++out1;
-		++out2;
-		consumed += 32;
+// Returns the number of bytes consumed.
+size_t memcpy_interleaved_fastpath(uint8_t *dest1, uint8_t *dest2, const uint8_t *src, size_t n)
+{
+	const uint8_t *limit = src + n;
+	size_t consumed = 0;
+
+	// Align end to 32 bytes.
+	limit = (const uint8_t *)(intptr_t(limit) & ~31);
+
+	if (src >= limit) {
+		return 0;
 	}
-#endif
 
-	return consumed;
+	// Process [0,31] bytes, such that start gets aligned to 32 bytes.
+	const uint8_t *aligned_src = (const uint8_t *)(intptr_t(src + 31) & ~31);
+	if (aligned_src != src) {
+		size_t n2 = aligned_src - src;
+		memcpy_interleaved_slow(dest1, dest2, src, n2);
+		dest1 += n2 / 2;
+		dest2 += n2 / 2;
+		if (n2 % 2) {
+			swap(dest1, dest2);
+		}
+		src = aligned_src;
+		consumed += n2;
+	}
+
+	// Make the length a multiple of 64.
+	if (((limit - src) % 64) != 0) {
+		limit -= 32;
+	}
+	assert(((limit - src) % 64) == 0);
+
+	return consumed + memcpy_interleaved_fastpath_core(dest1, dest2, src, limit);
 }
 
-#endif  // defined(__SSE2__)
+#endif  // defined(HAS_MULTIVERSIONING)
 
 void memcpy_interleaved(uint8_t *dest1, uint8_t *dest2, const uint8_t *src, size_t n)
 {
-#ifdef __SSE2__
+#if HAS_MULTIVERSIONING
 	size_t consumed = memcpy_interleaved_fastpath(dest1, dest2, src, n);
 	src += consumed;
 	dest1 += consumed / 2;
@@ -126,11 +158,9 @@
 		swap(dest1, dest2);
 	}
 	n -= consumed;
+#endif
 
 	if (n > 0) {
 		memcpy_interleaved_slow(dest1, dest2, src, n);
 	}
-#else
-	memcpy_interleaved_slow(dest1, dest2, src, n);
-#endif
 }
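
A note on the memcpy_interleaved.cpp hunk above, for easier review: the old
code chose between the SSE2 and AVX2 paths at compile time with #if __SSE2__
and #if __AVX2__, so a build without those flags never got a fast path at
all. The new code compiles default, SSE2 and AVX2 variants of
memcpy_interleaved_fastpath_core() unconditionally (on x86) and lets GCC's
function multiversioning pick one at run time. A minimal, self-contained
sketch of that mechanism (illustrative names only, not part of the patch):

  // multiversion_demo.cpp -- an illustrative sketch, not Nageru code.
  // Build with g++ on x86/x86-64: the compiler emits all three variants
  // plus a resolver, and each call picks the best one for the running CPU.
  #include <stdio.h>

  __attribute__((target("default")))
  int pick_variant() { return 0; }  // fallback when the CPU lacks SSE2

  __attribute__((target("sse2")))
  int pick_variant() { return 2; }  // used when the CPU supports SSE2

  __attribute__((target("avx2")))
  int pick_variant() { return 4; }  // used when the CPU supports AVX2

  int main()
  {
          printf("selected variant: %d\n", pick_variant());
          return 0;
  }

The patch applies the same pattern to memcpy_interleaved_fastpath_core(),
so the SIMD path no longer depends on which flags the package happened to be
built with (useful for 32-bit x86 builds, where SSE2 is typically not
enabled by default).
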
diff -Nru nageru-1.8.2/shared/metrics.cpp nageru-1.8.4/shared/metrics.cpp
--- nageru-1.8.2/shared/metrics.cpp	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/shared/metrics.cpp	2019-03-11 23:40:21.000000000 +0100
@@ -99,7 +99,7 @@
 
 	// If this is the last metric with this name, remove the type as well.
 	if (!((it != metrics.begin() && prev(it)->first.name == name) ||
-	      (it != metrics.end() && next(it)->first.name == name))) {
+	      (it != metrics.end() && next(it) != metrics.end() && next(it)->first.name == name))) {
 		types.erase(name);
 	}
 
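
The metrics.cpp hunk is a one-liner, but the failure mode may not be obvious
from the diff alone: when the iterator points at the last entry in the map,
next(it) is already metrics.end(), and the old condition dereferenced it to
compare names, reading past the end of the container. The fix adds the
missing next(it) != metrics.end() guard. A tiny sketch of the same guard
pattern (illustrative only, not code from the package):

  // next_guard_demo.cpp -- an illustrative sketch, not Nageru code.
  #include <stdio.h>

  #include <iterator>
  #include <map>
  #include <string>

  int main()
  {
          std::map<int, std::string> metrics{{1, "frames"}, {2, "dropped"}};
          auto it = metrics.find(2);  // points at the last element

          // std::next(it) is metrics.end() here; dereferencing it to look
          // at the value would read past the end. Check against end()
          // first, exactly as the fix above does.
          if (std::next(it) != metrics.end() && std::next(it)->second == "dropped") {
                  printf("a later entry shares the name\n");
          } else {
                  printf("no later entry to compare against\n");
          }
          return 0;
  }
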
diff -Nru nageru-1.8.2/shared/midi_device.cpp nageru-1.8.4/shared/midi_device.cpp
--- nageru-1.8.2/shared/midi_device.cpp	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/shared/midi_device.cpp	2019-03-11 23:40:21.000000000 +0100
@@ -73,7 +73,7 @@
 
 	// The sequencer object is now ready to be used from other threads.
 	{
-		lock_guard<mutex> lock(mu);
+		lock_guard<recursive_mutex> lock(mu);
 		alsa_seq = seq;
 		alsa_queue_id = queue_id;
 	}
@@ -97,7 +97,7 @@
 		while (snd_seq_query_next_port(seq, pinfo) >= 0) {
 			constexpr int mask = SND_SEQ_PORT_CAP_READ | SND_SEQ_PORT_CAP_SUBS_READ;
 			if ((snd_seq_port_info_get_capability(pinfo) & mask) == mask) {
-				lock_guard<mutex> lock(mu);
+				lock_guard<recursive_mutex> lock(mu);
 				subscribe_to_port_lock_held(seq, *snd_seq_port_info_get_addr(pinfo));
 			}
 		}
@@ -154,7 +154,7 @@
 		return;
 	}
 
-	lock_guard<mutex> lock(mu);
+	lock_guard<recursive_mutex> lock(mu);
 	switch (event->type) {
 	case SND_SEQ_EVENT_CONTROLLER: {
 		receiver->controller_received(event->data.control.param, event->data.control.value);
diff -Nru nageru-1.8.2/shared/midi_device.h nageru-1.8.4/shared/midi_device.h
--- nageru-1.8.2/shared/midi_device.h	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/shared/midi_device.h	2019-03-11 23:40:21.000000000 +0100
@@ -47,7 +47,7 @@
 
 	void update_lights(const std::map<LightKey, uint8_t> &active_lights)
 	{
-		std::lock_guard<std::mutex> lock(mu);
+		std::lock_guard<std::recursive_mutex> lock(mu);
 		update_lights_lock_held(active_lights);
 	}
 
@@ -60,7 +60,7 @@
 	std::atomic<bool> should_quit{false};
 	int should_quit_fd;
 
-	mutable std::mutex mu;
+	mutable std::recursive_mutex mu;  // Recursive because the MIDI receiver may update_lights() back while we are sending it stuff.
 	MIDIReceiver *receiver;  // Under <mu>.
 
 	std::thread midi_thread;
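
The midi_device changes above all follow from the mutex type: the event
thread holds mu while it hands an event to the receiver, and the receiver is
allowed to call update_lights() back on the same device, which locks mu
again from the same thread. With std::mutex that second lock deadlocks;
std::recursive_mutex lets the owning thread re-lock. A minimal sketch of the
re-entrancy pattern (hypothetical class, not the actual Nageru code):

  // reentrant_lock_demo.cpp -- an illustrative sketch, not Nageru code.
  #include <stdio.h>

  #include <mutex>

  class Device {
  public:
          // Called on the event thread; invokes the callback with mu held.
          void handle_event()
          {
                  std::lock_guard<std::recursive_mutex> lock(mu);
                  on_event();  // may call straight back into update_lights()
          }

          // Public entry point the callback is allowed to use.
          void update_lights()
          {
                  // A second std::mutex lock from the same thread would
                  // deadlock here; recursive_mutex just re-locks.
                  std::lock_guard<std::recursive_mutex> lock(mu);
                  printf("lights updated\n");
          }

  private:
          void on_event() { update_lights(); }

          std::recursive_mutex mu;
  };

  int main()
  {
          Device d;
          d.handle_event();
          return 0;
  }
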
diff -Nru nageru-1.8.2/shared/timebase.h nageru-1.8.4/shared/timebase.h
--- nageru-1.8.2/shared/timebase.h	2019-01-19 22:57:27.000000000 +0100
+++ nageru-1.8.4/shared/timebase.h	2019-03-11 23:40:21.000000000 +0100
@@ -3,6 +3,8 @@
 
 #include <ratio>
 
+#include <stdint.h>
+
 // Common timebase that allows us to represent one frame exactly in all the
 // relevant frame rates:
 //
@@ -15,7 +17,7 @@
 // If we also wanted to represent one sample at 48000 Hz, we'd need
 // to go to 300000. Also supporting one sample at 44100 Hz would mean
 // going to 44100000; probably a bit excessive.
-#define TIMEBASE 120000
+constexpr int64_t TIMEBASE = 120000;
 
 // Some muxes, like MP4 (or at least avformat's implementation of it),
 // are not too fond of values above 2^31. At timebase 120000, that's only
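
The timebase.h change is subtler than it looks: with the #define, TIMEBASE
is a plain int, so an expression like frames * TIMEBASE (with frames itself
an int) is evaluated in 32-bit arithmetic and can overflow before it is ever
stored in an int64_t, whereas the constexpr int64_t forces the whole
expression to 64 bits. A small sketch of the difference (made-up numbers,
not code from the package):

  // timebase_demo.cpp -- an illustrative sketch, not Nageru code.
  #include <inttypes.h>
  #include <stdio.h>

  #define TIMEBASE_MACRO 120000               // plain int
  constexpr int64_t TIMEBASE_TYPED = 120000;  // 64-bit from the start

  int main()
  {
          int frames = 20000;  // enough to push int * int past 2^31

          // int * int: 20000 * 120000 = 2.4e9 overflows a 32-bit int
          // (undefined behavior) before the assignment widens it.
          int64_t bad = frames * TIMEBASE_MACRO;

          // int * int64_t: promoted to 64 bits, no overflow.
          int64_t good = frames * TIMEBASE_TYPED;

          printf("bad=%" PRId64 " good=%" PRId64 "\n", bad, good);
          return 0;
  }

Giving the constant a definite 64-bit type also means it takes the same
printf format specifier (PRId64) on every architecture.
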
