-
Grouping Focus Bracketed Photos from an OM-1 Mark 2
The Problem
I have a folder of image (JPG and RAW) files of some focus bracketed shots from my OM-1 Mark 2. I need to find a way to group them… maybe there is exif data that’d help? Or maybe I can use bash to group by timestamp to find all shots within 1 second of the previous shot… if the gap is more than 1 second then it starts a new “series”.
That was my prompt for Claude.
Claude suggested I try this first, to see what’s available:
```bash
# Install exiftool if you don't have it
# brew install exiftool                    # macOS
# sudo apt install libimage-exiftool-perl  # Ubuntu/Debian

# Check what EXIF data is available
exiftool P7120919.JPG | grep -i bracket
exiftool P7120919.JPG | grep -i focus
exiftool P7120919.JPG | grep -i sequence
```

I ran the commands and replied:
Ooh good news. There appears to be some metadata in both JPG and ORF files.
```
Focus Bracket Step Size : 5
White Balance Bracket   : 0 0
Drive Mode              : Focus Bracketing, Shot 7; Electronic shutter
```

Claude then gave me a bash script. I made a few tweaks, re-prompted, made a few more tweaks, then it was done.
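That Drive Mode tag is the key signal: each frame records its position in the bracket ("Shot 7" above). Here's a quick sketch of pulling the shot number out with a bash regex capture, using the tag value above as a stand-in for real exiftool output:

```shell
# Stand-in for what exiftool reports, e.g.:
#   exiftool -DriveMode -T P7120919.JPG
drive_mode='Focus Bracketing, Shot 7; Electronic shutter'

# Pull the shot number out with a regex capture group
if [[ "$drive_mode" =~ "Focus Bracketing, Shot "([0-9]+) ]]; then
    shot_num="${BASH_REMATCH[1]}"
    echo "Shot number: $shot_num"
fi
```

When the shot number goes backwards from one file to the next (e.g., 9 then 1), a new series has started — that reset is the entire grouping heuristic.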
It turns this flat folder of files:
```
P7120857.JPG
P7120857.ORF
P7120858.JPG
P7120858.ORF
P7120859.JPG
P7120859.ORF
P7120860.JPG
P7120860.ORF
P7120861.JPG
P7120861.ORF
P7120863.JPG
P7120863.ORF
P7120864.JPG
P7120864.ORF
P7120865.JPG
P7120865.ORF
```

Into this:
```
├── focus_bracket_series_1
│   ├── P7120857.JPG
│   ├── P7120857.ORF
│   ├── P7120858.JPG
│   ├── P7120858.ORF
│   ├── P7120859.JPG
│   ├── P7120859.ORF
│   ├── P7120860.JPG
│   ├── P7120860.ORF
│   ├── P7120861.JPG
│   └── P7120861.ORF
├── focus_bracket_series_2
│   ├── P7120863.JPG
│   ├── P7120863.ORF
│   ├── P7120864.JPG
│   ├── P7120864.ORF
│   ├── P7120865.JPG
│   └── P7120865.ORF
```

Implementing
- Write this script somewhere (e.g., `~/scripts/om1-focus-bracket-grouper.sh`)
- Make it executable (`chmod +x om1-focus-bracket-grouper.sh`)
- Go into the flat folder (e.g., `cd ~/Pictures/MacroShots`)
- Run it (`~/scripts/om1-focus-bracket-grouper.sh`)
- It will show you what the results will be and then prompt you to hit Y to move the files or N to abort.
```bash
#!/bin/bash

# Analyze focus bracket sequences (dry run)
analyze_focus_brackets() {
    echo "=== Focus Bracket Analysis ==="
    echo

    local series_num=1
    local prev_shot_num=999
    local series_files=()

    # Sort by filename (chronological order) instead of modification time
    for file in $(ls P*.ORF P*.JPG 2>/dev/null | sort); do
        [[ -f "$file" ]] || continue

        # Get both drive mode and timestamp for verification
        drive_mode=$(exiftool -DriveMode -T "$file" 2>/dev/null)
        timestamp=$(exiftool -DateTimeOriginal -d "%Y-%m-%d %H:%M:%S" -T "$file" 2>/dev/null)

        echo "Processing: $file - Drive Mode: '$drive_mode'"

        if [[ "$drive_mode" =~ "Focus Bracketing, Shot "([0-9]+) ]]; then
            shot_num="${BASH_REMATCH[1]}"
            echo "  Found shot number: $shot_num"

            # If shot number reset (went backwards), we found a new series
            if [[ $shot_num -lt $prev_shot_num ]]; then
                if [[ ${#series_files[@]} -gt 0 ]]; then
                    echo "Series $((series_num-1)): ${#series_files[@]} shots"
                    printf "  %s\n" "${series_files[@]}"
                    echo
                fi
                series_files=()
                echo "--- Starting Series $series_num ---"
                ((series_num++))
            fi

            series_files+=("$file (Shot $shot_num, $timestamp)")
            prev_shot_num=$shot_num
        else
            echo "  Not a focus bracket shot, skipping"
        fi
    done

    # Print final series
    if [[ ${#series_files[@]} -gt 0 ]]; then
        echo "Series $((series_num-1)): ${#series_files[@]} shots"
        printf "  %s\n" "${series_files[@]}"
    fi

    echo
    echo "Total series found: $((series_num-1))"
}

# Function to actually move files after confirmation
move_focus_brackets() {
    echo "Moving files to series folders..."

    local series_num=1
    local prev_shot_num=999

    for file in $(ls P*.ORF P*.JPG 2>/dev/null | sort); do
        [[ -f "$file" ]] || continue

        drive_mode=$(exiftool -DriveMode -T "$file" 2>/dev/null)

        if [[ "$drive_mode" =~ "Focus Bracketing, Shot "([0-9]+) ]]; then
            shot_num="${BASH_REMATCH[1]}"

            if [[ $shot_num -lt $prev_shot_num ]]; then
                ((series_num++))
            fi

            series_dir="focus_bracket_series_$((series_num-1))"
            mkdir -p "$series_dir"

            echo "Moving $file to $series_dir"
            mv "$file" "$series_dir/"

            prev_shot_num=$shot_num
        fi
    done
}

# Run analysis first
analyze_focus_brackets

echo
read -p "Proceed with moving files? (y/N): " -n 1 -r
echo

if [[ $REPLY =~ ^[Yy]$ ]]; then
    move_focus_brackets
    echo "Done!"
else
    echo "Aborted. Files not moved."
fi
```

-
Handling Daylight Saving Time in Cron Jobs
- Our business is located in Maine (same timezone as New York; we set clocks back one hour in the Fall and ahead one hour in the Spring)
- Server A uses ET (automatically adjusts to UTC-4 in the summer, and UTC-5 in the winter)
- Server B uses UTC (not affected by Daylight Saving Time) – we cannot control the timezone of this server
We want to ensure both servers are always referencing, figuratively, the same clock on the wall in Maine.
Here’s what we can do to make sure the timing of cron jobs on Server B (UTC) match the changing times of Server A (ET).
```
# Every day at 11:55pm EST (due to the conditional, this only runs during Standard Time, i.e. fall/winter)
55 4 * * * [ `TZ=America/New_York date +\%Z` = EST ] && php artisan scrub-db >/dev/null 2>&1

# Every day at 11:55pm EDT (due to the conditional, this only runs during Daylight Saving Time, i.e. spring/summer)
55 3 * * * [ `TZ=America/New_York date +\%Z` = EDT ] && php artisan scrub-db >/dev/null 2>&1
```

Only one of the `php artisan scrub-db` commands will execute on any given day, depending on the time of year.
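You can verify the conditional itself from any shell before trusting it in cron — `date +%Z` under a forced TZ prints the abbreviation the crontab entries compare against:

```shell
# Prints EST or EDT depending on the date (America/New_York observes DST)
tz_abbr=$(TZ=America/New_York date +%Z)
echo "$tz_abbr"

# UTC, by contrast, never changes
utc_abbr=$(TZ=UTC date +%Z)
echo "$utc_abbr"
```

Note that inside a crontab the `%` character has to be escaped, which is why the entries above write `+\%Z`.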
-
Automatically Put Files Into a YYYY/MM Directory Structure
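The script below finds `.sql` and `.gz` files in the current directory and files each one into a `YYYY/MM` folder based on its modification time. If you want to see the core move in isolation first, here's a minimal sketch on a throwaway temp file (it assumes GNU `stat`/`date`; on macOS the BSD equivalents like `stat -f %Sm` would be needed instead):

```shell
# Create a throwaway working directory and file to demonstrate on
workdir=$(mktemp -d)
touch "$workdir/backup.sql"
file="$workdir/backup.sql"

# Modification year and month via GNU stat + date, as the full script does
year=$(date -d "$(stat -c %y "$file")" +%Y)
month=$(date -d "$(stat -c %y "$file")" +%m)

# Create the target directory and move the file into it
mkdir -p "$workdir/$year/$month"
mv "$file" "$workdir/$year/$month/"

echo "Moved to: $workdir/$year/$month/backup.sql"
```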
```bash
#!/usr/bin/env bash

## This script will find .sql and .gz files in the current
## directory and move them into YYYY/MM directories (created
## automatically).

BASE_DIR=$(pwd)

## Find all .sql and .gz files in the current directory. The
## \( ... \) grouping ensures -type f applies to both patterns.
find "$BASE_DIR" -maxdepth 1 -type f \( -name "*.sql" -o -name "*.gz" \) |
while IFS= read -r file; do
    ## Get the file's modification year
    year=$(date -d "$(stat -c %y "$file")" +%Y)

    ## Get the file's modification month
    month=$(date -d "$(stat -c %y "$file")" +%m)

    ## Create the YYYY/MM directories if they don't exist. The -p flag
    ## makes 'mkdir' create the parent directories as needed so
    ## you don't need to create $year explicitly.
    [[ ! -d "$BASE_DIR/$year/$month" ]] && mkdir -p "$BASE_DIR/$year/$month"

    ## Move the file
    mv "$file" "$BASE_DIR/$year/$month"
    echo "$file" "$BASE_DIR/$year/$month"
done
```

-
Find and Open (in vim) Multiple Files
This is a quick set of examples for finding and opening multiple files in Vim.
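One detail worth knowing before the examples: ending `-exec` with `+` hands all matches to a single command invocation (which is what lets vimdiff, tabs, and splits operate on the whole set), while `\;` runs the command once per file. The difference is easy to see with `echo` on a couple of throwaway files:

```shell
# Two throwaway files in a temp directory (hypothetical names)
tmp=$(mktemp -d)
touch "$tmp/a.yml" "$tmp/b.yml"

# `+` batches every match into one invocation: one output line
batched=$(find "$tmp" -type f -exec echo {} + | wc -l | tr -d ' ')

# `\;` invokes the command once per match: one output line per file
per_file=$(find "$tmp" -type f -exec echo {} \; | wc -l | tr -d ' ')

echo "batched: $batched, per-file: $per_file"
```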
```bash
# Open all found files in vimdiff
find . -name '.lando.yml' -exec vimdiff {} +

# Open all found files, one-by-one
find . -name '.lando.yml' -exec vim {} \;

# Open all found files in tabs
find . -name '.lando.yml' -exec vim -p {} +

# Open all found files in vertical splits
find . -name '.lando.yml' -exec vim -O {} +

# Open all found files in horizontal splits
find . -name '.lando.yml' -exec vim -o {} +
```

-
Monitoring a Drupal 8 Migration Executed with Drush
Update 2 is proving to be the most useful method.
Update 1
Somehow I missed the documentation regarding the --feedback argument for the drush mi command. It’s similar to the methods below in that it helps you see that the migration is working, though you don’t see the totals. You can ask for feedback in seconds or items.
Update 2
Another method that is a bit dirty but very useful is to add a single line to core/modules/migrate/src/MigrateExecutable.php.
Adding the highlighted line to \Drupal\migrate\MigrateExecutable::import will let you see exactly where the migration is (which row is being processed).
If this is something you want permanently, consider posting an issue on d.o to request more verbose logging. This is my hackey solution for my immediate needs.
```php
$destination = $this->migration->getDestinationPlugin();
while ($source->valid()) {
  $row = $source->current();
  $this->sourceIdValues = $row->getSourceIdValues();
  error_log(date('c') . ' Processing row ' . $this->counter . ' of ' . $this->migration->id() . ': ' . json_encode($this->sourceIdValues));
  try {
    $this->processRow($row);
    $save = TRUE;
```

The output (in your logs, and in the drush output) will be something like this (`institution_node` is my migration, `inst_guid` is the only id):
As of Thursday, August 16, 2018, the counter() method is removed, so you have to drop a counter variable in its place yourself.
This method of monitoring execution is surprisingly effective as it’s fast and you don’t have to lift a finger for it to happen each time you run a migration.
Original Post
This isn’t exactly an ideal solution, but it will do in a pinch. I’m working on a migration of more than 46,000 nodes (in a single migration among many). I needed a way to monitor the progress but I wanted to do it as quickly as possible (no coding). Here’s what I came up with, which I run in a new iTerm window.
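A sketch of the kind of command this can be — the `migrate_map_member_node` table name here is an assumption based on the usual `migrate_map_<migration id>` naming convention, so confirm the actual name in your database first:

```shell
# Re-run the count every 5 seconds (run from the Drupal root)
watch -n 5 "drush sqlq 'SELECT COUNT(*) FROM migrate_map_member_node'"
```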
All we’re doing is asking how many records are in the “member_node” migration map every 5 seconds. If we started at 6350 items we know that by the end of 20 seconds we have created 35 records.
We could also query the target content type itself:
Here we can analyze the data the same way to see how many records the migration is creating.
I recognize that this is pretty crude, but it’s effective; you are able to see that it’s working, instead of just staring at an unchanging prompt and watching your CPU jump.
-
Recursively Finding and Operating on the Largest n Number of Files of a Particular Type
If you want to reverse-engineer this, you can work from left to right, adding one command at a time to see what each step does to the output.
```bash
du -ah . | grep -v "/$" | sort -rh | grep "jpg" | head -25 | cut -f 2 | while read -r line ; do ls -alh "$line"; done
```

This example just lists the files we found, which is pointless given that's what we already had before introducing the `cut` command.
How about a practical example?
Here’s how you can resize the 25 largest jpg files using mogrify (part of ImageMagick) to reduce the quality to 80:
```bash
du -ah . | grep -v "/$" | sort -rh | grep "jpg" | head -25 | cut -f 2 | while read -r line ; do mogrify -quality 80 "$line"; done
```

What if you wanted to grab only files that are larger than a specific size? One way to do it is with the `find` command. For example, find all files greater than 8M:
```bash
find . -type f -iname "*.jpg" -size +8M -exec du -ah {} \; | sort -rh | etc...
```

-
Delete Last Command from Bash History
If you’re like me, on occasion you accidentally (or sometimes purposefully) include a password in a command. Until recently I would execute `history`, find the offending command’s number, then do a `history -d NUMBER`. It is kind of a pain.
The following deletes the last item in your bash history:
```bash
history -d $((HISTCMD-2)) && history -d $((HISTCMD-1))
```

You can create an alias for this, type it manually (umm, no thanks), or, as is the case for me, create a Keyboard Maestro macro for it. Here’s what my macro looks like:
-
Customizable Date variables in Keyboard Maestro
UPDATE: Though the solution below works well, I do recommend following the first commenter’s advice and using the ICUDateTime text tokens instead, which allow you to use any ICU date format, without having to invoke a shell script.
Sometimes you need a date and/or time variable in your Keyboard Maestro macros.
One way I’ve found to do this is via an “Execute Shell Script” action. You’ll just use the `date` command and format as desired.

-
Determining Your Most Used Commands in Terminal
I’m always looking to automate things using Alfred, Keyboard Maestro, Text Expander, and Python. I was curious which terminal commands I use most often, so I did some experimenting. Basically I wanted to know how many times I’ve executed each unique command ( ssh myserverx or ssh myservery, not just ssh). I started by piping the output of history to sort (to group), then to uniq (to count), then back to sort (to sort by the number of occurrences).
```bash
history | sort | uniq -c | sort -n
```
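To see what each stage contributes, here's the same pipeline run over a tiny fabricated history instead of the real one — `sort` groups identical lines, `uniq -c` counts each group, and the final `sort -n` orders by count:

```shell
# Three fake history entries: two identical, one different
counts=$(printf 'ssh myserverx\nssh myservery\nssh myserverx\n' | sort | uniq -c | sort -n)
echo "$counts"
# Least-used first: "1 ssh myservery", then "2 ssh myserverx"
```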
The same pipeline works against the history file directly:

```bash
cat ~/.bash_history | sort | uniq -c | sort -n
```

-
“755”-style permissions with ‘ls’
After a quick Google search for “ls permissions octal” I found a very handy alias to put in my .bashrc. Not only does it show the octal value of the permissions for each file/dir, it will undoubtedly help you more quickly recognize and interpret the normal `ls -al` output.
```bash
alias lso="ls -alG | awk '{k=0;for(i=0;i<=8;i++)k+=((substr(\$1,i+2,1)~/[rwx]/)*2^(8-i));if(k)printf(\" %0o \",k);print}'"
```
```
mycomp:~ adam$ lso
755 drwxr-xr-x  15 adam  staff    510 Dec  6 02:47 .vim
644 -rw-r--r--   1 adam  staff   1136 Dec 18 16:55 .vim_mru_files
600 -rw-------   1 adam  staff  13665 Dec 18 16:56 .viminfo
```
If you just want to run it once without setting up the alias:

```bash
ls -l | awk '{k=0;for(i=0;i<=8;i++)k+=((substr($1,i+2,1)~/[rwx]/)*2^(8-i));if(k)printf("%0o ",k);print}'
```
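To convince yourself the awk expression computes the right value, run it against a file with known permissions — a throwaway temp file here:

```shell
# A file with a known mode
tmp=$(mktemp -d)
touch "$tmp/example"
chmod 644 "$tmp/example"

# Same awk as above: fold the nine r/w/x mode characters into an octal number per line
out=$(ls -l "$tmp" | awk '{k=0;for(i=0;i<=8;i++)k+=((substr($1,i+2,1)~/[rwx]/)*2^(8-i));if(k)printf("%0o ",k);print}')
echo "$out"
```

The `example` line of the output should begin with `644`; lines whose first field has no permission bits (like `total 0`) pass through unchanged because `k` stays 0.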