<![CDATA[Dan Makovec's site]]>https://dan.makovec.netGatsbyJSThu, 16 Feb 2023 03:07:14 GMT<![CDATA[SSO between Google Apps (G Suite) and AWS Console]]>Help for anyone experiencing the dreaded "Your request included an invalid SAML response. To logout, click here" error when setting up SSO between Google Apps (G Suite) and AWS.

]]>
https://dan.makovec.netsso-between-google-apps-g-suite-and-aws-consolehttps://dan.makovec.netsso-between-google-apps-g-suite-and-aws-consoleThu, 27 Aug 2020 09:00:00 GMT<p>So I wanted to play with SAML SSO using my Google Apps (G Suite) service as my IDP and my AWS account as the client. I diligently followed the directions at <a href="https://support.google.com/a/answer/6194963?hl=en" title="G Suite Admin help on SSO with AWS">Google</a> to do so, but kept getting this error from AWS whenever I attempted to sign in: </p> <p><code>Your request included an invalid SAML response. To logout, click here</code></p> <p>I couldn't understand what the hell was going wrong, as I'd base64-decoded my SAML responses and saw everything I was supposed to have in there, until I finally came across <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/troubleshoot_saml.html#troubleshoot_saml_invalid-response" title="Troubleshooting page for SAML errors">this AWS troubleshooting page</a>. </p> <p>It finally made sense: </p> <blockquote> <p>The attribute must contain one or more AttributeValue elements, each containing a comma-separated pair of strings</p> </blockquote> <p>Aha! Back to the Google console, I edited my custom user attributes for my Amazon role. Originally I'd had the ARN of the role I had mapped in there, like so:</p> <img alt="Broken AWS role attribute in Google Apps" title="Broken AWS role attribute in Google Apps" src="//images.ctfassets.net/xqmfo4199nci/afj3LwGcZ8CYWO2OBdUSF/1d481550f82eb42b3737a7ddd90e192d/broken-role.png" style="max-width:100%" /> <p>I had to concatenate it with the ARN of the SAML provider I'd set up for Google, comma-separated like so:</p> <p><img alt="Google role ARN concatenated onto SAML provider ARN" title="Google role ARN concatenated onto SAML provider ARN" src="//images.ctfassets.net/xqmfo4199nci/whAsutqMiTmPGWGVRYoNA/a2d0a0566bc3a54c2ab7c45506220e89/fixed-role-google.png" style="max-width:100%" /></p> <p>OK, so I started a new browser session, logged into my Google Apps (G Suite) account and clicked on the AWS app, then:</p> <p><img src="//images.ctfassets.net/xqmfo4199nci/6E8rp3djuxTPGhlNR5HbNr/6c84518b0b24164ac8c8f4e872033886/working-aws-console.png" alt="A working console!" title="A working console!" style="max-width:100%;" /></p> <p>Hallelujah, all working! The top right now shows the role name (in this case, I called it "Google") and my email address. I'm in, baby!</p>
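<p>For reference, the working attribute value is just the role ARN and the SAML provider ARN joined by a comma. A made-up example (the account ID, role name and provider name below are placeholders, not my real ones):</p> <pre><code>arn:aws:iam::123456789012:role/Google,arn:aws:iam::123456789012:saml-provider/GoogleApps
</code></pre>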
<![CDATA[Rapidly switch AWS CLI profiles]]>This simple bash/zsh function makes it easy to switch between AWS CLI user profiles in your terminal sessions.

]]>
https://dan.makovec.netrapidly-switch-aws-cli-profileshttps://dan.makovec.netrapidly-switch-aws-cli-profilesThu, 02 Jul 2020 14:00:00 GMT<p>I often find myself using the AWS CLI with multiple accounts and IAM users, and find switching between them using the <code>--profile</code> option tedious.</p> <p>Here's a cute little shell function to make handling profiles easier, by setting the appropriate environment variable for all subsequent AWS CLI calls in the current terminal session.</p> <p>When used with a profile name, it simply sets the <code>AWS_PROFILE</code> environment variable to the profile of your choice. It also allows you to quickly check the current profile's user details, in case your profile name isn't enough information to tell you what you need to know.</p> <h1>The code</h1> <pre><code>function awsuser() {
    if [ "$1" = "" ]; then
        # No arguments: show the current profile, account ID/alias and IAM user
        AWS_ACCOUNT_ALIAS=$(aws iam list-account-aliases --query "AccountAliases[0]" --output text)
        USER_DETAILS=$(aws iam get-user --output json)
        AWS_USER=$(echo $USER_DETAILS | jq -r .User.UserName)
        AWS_ACCOUNT_ID=$(echo ${USER_DETAILS} | jq -r .User.Arn | sed -e 's/.*:://g' -e 's/:.*//g')
        echo "(${AWS_PROFILE:-default}): ${AWS_ACCOUNT_ID}:${AWS_ACCOUNT_ALIAS} -> ${AWS_USER}"
    elif [ "-l" = "$1" ]; then
        # -l: list the profile names defined in the shared credentials file
        AWS_SHARED_CREDENTIALS_FILE=${AWS_SHARED_CREDENTIALS_FILE:-${HOME}/.aws/credentials}
        grep "\[" ${AWS_SHARED_CREDENTIALS_FILE}
    else
        # Anything else: treat it as a profile name and switch to it
        export AWS_PROFILE=$1
    fi
}
</code></pre> <h1>Using it</h1> <p>So let's say you've got <code>~/.aws/credentials</code> configured as follows:</p> <pre><code>[devaccount-admin]
aws_access_key_id = ......
aws_secret_access_key = ......

[devaccount-developer]
aws_access_key_id = ......
aws_secret_access_key = ......

[prodaccount-admin]
aws_access_key_id = ......
aws_secret_access_key = ......

[prodaccount-user]
aws_access_key_id = ......
aws_secret_access_key = ......
</code></pre> <p>You can get a quick list of all your profiles:</p> <pre><code>$ awsuser -l
[devaccount-admin]
[devaccount-developer]
[prodaccount-admin]
[prodaccount-user]
</code></pre> <p>Switch to a specific user profile:</p> <pre><code>$ awsuser devaccount-admin
</code></pre> <p>Then run with no args to see the profile, account ID and alias, and user ID you're currently running with:</p> <pre><code>$ awsuser
(devaccount-admin): 123456789012:dmakovecdevaccount -> admin
$ aws iam get-user
{
    "User": {
        "Path": "/",
        "UserName": "admin",
        "UserId": "......",
        "Arn": "arn:aws:iam::123456789012:user/admin",
        "CreateDate": "2020-06-01T07:18:28+00:00",
        "PasswordLastUsed": "2020-07-03T05:14:48+00:00"
    }
}
</code></pre>
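<p>To have the function on hand in every new terminal, you can keep it in a file that your shell startup script sources. A minimal sketch, assuming you save it as <code>~/.awsuser.sh</code> (the path is just an example):</p> <pre><code># load the function in every new shell
echo 'source ~/.awsuser.sh' >> ~/.zshrc    # or ~/.bashrc for bash

# then, in any session:
awsuser devaccount-admin
aws s3 ls    # runs against devaccount-admin via AWS_PROFILE
</code></pre>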
<![CDATA[Going Serverless with Gatsby, Netlify and Contentful]]>I'm long past being able to use the AWS free tier, so during my downtime I ported my site from a traditional WordPress setup to fully serverless.

]]>
https://dan.makovec.netgoing-serverless-with-gatsby-netlify-and-contentfulhttps://dan.makovec.netgoing-serverless-with-gatsby-netlify-and-contentfulFri, 17 Apr 2020 06:22:00 GMT<p>So this is my first post on my new "serverless" site. The old one did me well for 5 years, but the world has moved on since WordPress and I wanted something that didn't need patching and didn't have ongoing maintenance costs. So, during a bit of downtime from work I decided it was time to take a look at the site and see if it was worth updating.</p> <p>I've been using AWS pretty much since it was just SQS and nothing but a bunch of APIs. I worked with it as various new tools like S3 and EC2 got rolled out, and have witnessed the revolution in virtualisation that it kicked off.</p> <p>While I've been working with it non-stop ever since, my personal website was running on a lonely little neglected EC2 instance running WordPress for years. I'd patched the software every now and then, and the auto-updates for the OS were applied whenever possible. Every now and then though, the instance would need to reboot, or for whatever reason Apache or MySQL would crap out. </p> <p>Frankly I couldn't be buggered figuring out what was going wrong and fixing it, as my day job is troubleshooting IT issues, and I really try to stay away from doing that when I'm not getting paid. Plus, the server was costing me money. Only a few $ a month (it was just a t2.micro), but still I'd rather the money be in my bank account than Mr. Bezos's.</p> <p>With the amount that I update the site, running a full LAMP stack seemed like overkill, and I'd heard of static site generation frameworks like <a href="https://www.gatsbyjs.org/">GatsbyJS</a> a while ago, so I set about rebuilding the site using this tech. I had a list of wants:</p> <ul> <li>I want to be able to publish blog articles using WYSIWYG or close to it (so not e.g. editing Markdown on GitHub)</li> <li>I want complete control to make any template or structural changes to the site without needing a full tool chain on any computer (so editing on GitHub is acceptable here)</li> <li>I want to work in TypeScript rather than plain ol' yucky JavaScript</li> <li>I want a contact form in case anybody wants to reach me via the site</li> <li>I want the site to run on HTTPS under my domain name</li> <li>I don't want to worry about patching and security</li> <li>I don't want to worry about the site going down</li> <li>I don't want to pay for hosting</li> </ul> <p>So I discovered headless CMSs, which are basically hosted services that let you write content using their fancy tools and expose a REST API for your application (e.g. Gatsby) to pull the content down in a structured manner and manipulate it at will. </p> <p>I found <a href="https://contentful.com">Contentful</a>, which has a free tier for developers, and that suits me perfectly. I then discovered <a href="https://alligator.io/gatsbyjs/gatsby-contentful-netlify/">this article</a>, which gave me a good starting template for Gatsby and a jumping-off point for integrating with Contentful content types. It also introduced me to <a href="https://www.netlify.com/">Netlify</a>, which provides a CI/CD service that I can leverage to build my site without the need to fire up my own container or EC2 instance.</p> <p>I set up my page structures and imported all of my old blog posts from WordPress into my space on Contentful. Then I changed the template component files into TypeScript and cleaned up the types. 
Next, I pushed my site configuration up to a <a href="https://github.com/dmakovec/makovec.net">GitHub repo</a>, set up my Netlify webhooks and let fly.</p> <p>Netlify takes care of automatically running the Gatsby build process whenever I add content on Contentful or commit code changes to GitHub, and hosting the finished product. It even has built-in Let's Encrypt support, so with a little DNS configuration I get the HTTPS support I need.</p> <p>Contentful does a great job of providing a decent Markdown editor and a place to store the sources of my articles. Arengu hosts the contact form, which sends me an email on submit, and there's an Arengu Gatsby plugin which displays the form within the context of my site. </p>
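<p>For anyone curious about the day-to-day workflow, it boils down to something like the sketch below. These are the stock Gatsby CLI commands and the standard Contentful source plugin rather than anything specific to my repo:</p> <pre><code># one-off setup
npm install -g gatsby-cli
npm install --save gatsby-source-contentful   # exposes Contentful entries to Gatsby's data layer

# local preview while tweaking templates
gatsby develop

# pushing to GitHub triggers a Netlify build; adding content in Contentful
# fires a webhook that kicks off the same build
git push
</code></pre>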
<![CDATA[Codeception: Tell Chrome to automatically accept desktop notifications]]>Set up Chrome environments in Codeception to allow Web Notifications.

]]>
https://dan.makovec.netcodeception-tell-chrome-to-automatically-accept-desktop-notificationshttps://dan.makovec.netcodeception-tell-chrome-to-automatically-accept-desktop-notificationsThu, 01 Sep 2016 03:38:00 GMT<p>If you're building a site that makes use of the handy <a href="https://www.sitepoint.com/introduction-web-notifications-api/">Web Notifications API</a> and want to test it in Codeception, you might want to tell your browser to enable notifications for the site. I had to fiddle a bit before getting this right, so hopefully it'll help somebody. In your <code>xxx-acceptance.suite.yml</code>, here are the relevant settings:</p> <pre><code>class_name: AcceptanceTester
modules:
    enabled:
        - Asserts
        - WebDriver:
            url: '/'
            browser: chrome
            restart: true
            capabilities:
                # Accept any JS alert boxes by default
                unexpectedAlertBehaviour: 'accept'
                chromeOptions:
                    args:
                        - 'enable-strict-powerful-feature-restrictions'
                        - 'window-size=1440,900'
                        - 'window-position=0,0'
                    prefs:
                        # Automatically accept desktop notifications
                        'profile.managed_default_content_settings.notifications': 1
</code></pre>
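<p>With that in place, the preference is applied every time the suite launches Chrome, so a normal run is enough to see it working (assuming a standard Composer install of Codeception):</p> <pre><code>vendor/bin/codecept run acceptance
</code></pre> <p>As far as I'm aware, setting the same preference to 2 should block notifications instead, if you ever need to test the denied path.</p>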
<![CDATA[LetsEncrypt on Amazon Linux]]>Setting up free Let's Encrypt certificates on an EC2 instance when you don't want to pay extra for an ALB

]]>
https://dan.makovec.netletsencrypt-on-amazon-linuxhttps://dan.makovec.netletsencrypt-on-amazon-linuxMon, 29 Aug 2016 12:43:00 GMT<p>This one's pretty simple.</p> <p>I started with <a href="https://ivopetkov.com/b/let-s-encrypt-on-ec2">Ivo Petkov's excellent notes</a> and O-mkar's <a href="http://stackoverflow.com/questions/38170100/letsencrypt-importerror-no-module-named-interface-on-amazon-linux-while-renewin">question and self-answer</a> to get Let's Encrypt up on my EC2 instance, then added a cron job.</p> <p>TL;DR:</p> <pre><code>sudo bash
yum install python27-devel git
git clone https://github.com/letsencrypt/letsencrypt /opt/letsencrypt
unset PYTHON_INSTALL_LAYOUT
/opt/letsencrypt/letsencrypt-auto --debug
echo "rsa-key-size = 4096" &gt;&gt; /etc/letsencrypt/config.ini
echo "email = email@example.com" &gt;&gt; /etc/letsencrypt/config.ini
unset PYTHON_INSTALL_LAYOUT
/opt/letsencrypt/letsencrypt-auto certonly --webroot -w /var/www/yourdomainroot -d yourdomain.com -d www.yourdomain.com --config /etc/letsencrypt/config.ini --agree-tos
yum install mod24_ssl
</code></pre> <p>Add the following to <code>/etc/httpd/conf.d/vhost.conf</code>:</p> <pre><code>&#x3C;VirtualHost *:443>
    ServerName yourdomain.com
    DocumentRoot "/var/www/yourdomainroot"
    &#x3C;Directory "/var/www/yourdomainroot">
        AllowOverride All
    &#x3C;/Directory>

    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/yourdomain.com/cert.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/yourdomain.com/privkey.pem
    SSLCertificateChainFile /etc/letsencrypt/live/yourdomain.com/chain.pem
    SSLProtocol All -SSLv2 -SSLv3
    SSLHonorCipherOrder on
    SSLCipherSuite "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA !RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS"
&#x3C;/VirtualHost>
</code></pre> <p>And finally, a renewal cron job:</p> <pre><code>cat > /opt/letsencrypt/autorenew &#x3C;&#x3C;EOF
#!/bin/bash
unset PYTHON_INSTALL_LAYOUT
/opt/letsencrypt/letsencrypt-auto renew --config /etc/letsencrypt/config.ini --agree-tos &amp;&amp; apachectl graceful
EOF
chmod a+x /opt/letsencrypt/autorenew
</code></pre> <p>Then run <code>crontab -e</code> and add the following entry:</p> <pre><code>0 0 * * * /opt/letsencrypt/autorenew
</code></pre> <p>For bonus marks, since you've probably got an HTTP vhost for port 80 that looks something like:</p> <pre><code>&#x3C;VirtualHost *:80>
    DocumentRoot "/var/www/yourdomainroot"
    ServerName yourdomain.com
    ServerAlias yourdomain.com
    &#x3C;Directory "/var/www/yourdomainroot">
        AllowOverride All
    &#x3C;/Directory>
    # Other directives here
&#x3C;/VirtualHost>
</code></pre> <p>Simply add the following into your <code>.htaccess</code> to redirect everybody hitting your formerly insecure site to HTTPS:</p> <pre><code>RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule .* https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
</code></pre>
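<p>To check that renewals are actually happening, a quick way to eyeball the certificate's validity dates (using the same live path as the vhost above) is:</p> <pre><code>openssl x509 -noout -dates -in /etc/letsencrypt/live/yourdomain.com/cert.pem
</code></pre>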
<![CDATA[Stop OSX Photos.app opening every time you plug your (Android or IOS) device into your Mac]]>Prevent annoying behavior on MacOS while developing mobile apps

]]>
https://dan.makovec.netstop-osx-photos-app-opening-every-time-you-plug-your-android-or-ios-devicehttps://dan.makovec.netstop-osx-photos-app-opening-every-time-you-plug-your-android-or-ios-deviceFri, 18 Mar 2016 00:44:00 GMT<p>This has been driving me mad for ages, every time I plugged a test device in to debug a Cordova app.  I eventually found <a href="http://apple.stackexchange.com/a/211212">this solution</a> on SE for El Capitan:</p> <pre><code>defaults -currentHost write com.apple.ImageCapture disableHotPlug -bool YES </code></pre> <p>Problem solved!</p>
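<p>If you later want the stock behaviour back (say, to import photos off a phone), deleting the key should restore the default; this is just the inverse of the setting above:</p> <pre><code>defaults -currentHost delete com.apple.ImageCapture disableHotPlug
</code></pre>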
<![CDATA[How to move Google Authenticator to a New Unrooted Android Phone]]>Moving Google Authenticator OTP settings between Android devices isn't easy. Here's how to do it by stealth.

]]>
https://dan.makovec.nethow-to-move-google-authenticator-to-a-new-unrooted-android-phonehttps://dan.makovec.nethow-to-move-google-authenticator-to-a-new-unrooted-android-phoneThu, 12 Jun 2014 02:19:00 GMT<p>I just picked up a new Galaxy S4, since they're going cheap now that the S5 is out.</p> <p>I've had my trusty S2 for a number of years, and it's been hacked around, rooted, had CyanogenMod on it and everything else you do to a phone that's out of warranty.</p> <p>But now that it's time to move all of my data to the new phone, I see that the problem of moving Google Authenticator between devices still hasn't been made easy.</p> <p>For clarity, if you're just using Authenticator for Google itself, and you only want one device at a time to use it, then it is indeed pretty easy, and Google makes the process fairly straightforward. I however use Authenticator for 2FA on a number of services, and I have it installed on both my phone and tablet, so my use case is different.</p> <p>Note that if you can root your new device, it's far easier just to install Titanium Backup on your old and new device and move your Authenticator settings between the two. I however can't root my S4 yet because I don't want to void its warranty by tripping Knox.</p> <p>So here's what you need on your old phone. I'll assume you know what the following terms mean. Google or xda-developers will help you out if you don't:</p> <ul> <li>The phone must be rooted</li> <li>Developer options enabled</li> <li>USB debugging enabled</li> <li>Android Debug Bridge (adb) installed on your PC/Mac</li> <li>A USB connection between the computer and phone, with adb shell able to connect</li> </ul> <p>On your computer, you'll need sqlite3 installed. Mac users can install it using Homebrew (formula: sqlite). Everybody else, you can figure it out.</p> <p>Now, grab your Authenticator database off of the old phone:</p> <pre><code>dan@Dans-MacBook-Pro ~ $ adb pull /data/data/com.google.android.apps.authenticator2/databases/databases
</code></pre> <p>Note that if the above step fails with a permission denied error, it's because your phone is locked down (kinda like my new unrooted S4), so you're outta luck.</p> <p>Now open the database locally on your computer:</p> <pre><code>dan@Dans-MacBook-Pro ~ $ sqlite3 databases
SQLite version 3.7.11 2012-03-20 11:35:50
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite&gt; .tables
accounts          android_metadata
sqlite&gt; select * from accounts;
1|Google|aaaabbbbccccdddd|0|0|0||
2|Dropbox|aaaabbbbccccdddd|0|0|0||
3|Facebook|aaaabbbbccccdddd|0|0|0|Facebook|Facebook
</code></pre> <p>You'll see the keys for each of your Authenticator accounts shown in the list (shown above as aaaabbbbccccdddd).</p> <p>Now, grab your new phone, open Authenticator, and one at a time add each account, selecting the "Enter provided key" option.</p> <p>Once you're done with each account, compare the time code generated on the new device with your old one. If they don't match, you made a typo.</p> <p>Now of course, since in the past few years Google, Samsung et al have made it progressively harder to get to this information, it might be an idea to back up these codes somewhere safe so that next time you upgrade you've got something to refer to!</p> <p>I hope that helps someone.</p>
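<p>If you'd like to see what the columns actually are before copying secrets around, sqlite3 can print the table definition and a more readable dump. These are standard sqlite3 options, nothing Authenticator-specific:</p> <pre><code>sqlite3 databases '.schema accounts'
sqlite3 -header -column databases 'select * from accounts;'
</code></pre>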
<![CDATA[Working with GitHub pull requests on other people's repos]]>Working with third-party libraries, Composer and GitHub pull requests...

]]>
https://dan.makovec.networking-with-github-pull-requests-on-other-peoples-reposhttps://dan.makovec.networking-with-github-pull-requests-on-other-peoples-reposMon, 04 Feb 2013 14:00:00 GMT<p>I'm using Composer for a ZF2 project.  I just upgraded to Zend Framework 2.1 from 2.0.6, and found that Ben Youngblood's <a href="https://github.com/bjyoungblood/BjyProfiler">bjyoungblood/BjyProfiler</a> stopped working.  A quick query on Freenode's #zftalk channel (thanks Diemuzi) showed me that a fix and subsequent pull request had been made, but the project's maintainer hadn't pulled it in yet.  So how can I make use of this fix?</p> <ol> <li>First step: fork the project on GitHub (<a href="https://github.com/dmakovec/BjyProfiler">dmakovec/BjyProfiler</a>)</li> <li>Next, clone it down to my computer.  I used the OS X GitHub GUI to do this, but <code>git clone</code> would do the trick</li> <li>Locate the pull request (Diemuzi showed it to me, but you'll find it on the GitHub page if you're not so lucky)</li> <li> <p>Go into your clone and perform the pull:</p> <pre><code>cd ~/dev/BjyProfiler &#x26;&#x26; git pull https://github.com/internalsystemerror/BjyProfiler.git hotfix/issue-21
</code></pre> </li> <li>Push it back up: <code>git push origin master</code></li> <li>Alter your project's <code>composer.json</code> file to add your fork in.</li> </ol> <pre><code>...
    "homepage": "http://www.example.com/",
# BEGIN INSERT #
    "repositories": [
        {
            "type": "vcs",
            "url": "https://github.com/dmakovec/BjyProfiler"
        }
    ],
# END INSERT #
    "require": {
...
</code></pre> <ol start="7"> <li>Blow away the project's vendor folder (<code>rm -rf vendor/bjyoungblood</code>) - if you don't do this, Composer may not update your repo and you'll be stuck scratching your head as to WTF went wrong for an hour.</li> </ol> <p>What you're doing is telling Composer to check the named repositories for any of the packages named in the "require" section before going off to Packagist and doing the same. If it finds a match in the repo (in this case, the "BjyProfiler" repo will contain a clone of the "composer.json" file in the top level folder, advising Composer that the repo is indeed "bjyoungblood/bjy-profiler"), then it will pull the package from that repo instead of Packagist. Then you just keep an eye on the project so that when the change is merged in, you can go back to the upstream project repo.</p>
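<p>Once composer.json points at your fork, a targeted update is usually enough to pull the patched package in. This is plain Composer usage rather than anything specific to this project; deleting the vendor folder as above is the belt-and-braces approach:</p> <pre><code>composer update bjyoungblood/bjy-profiler
</code></pre>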
<![CDATA[Upgrading NetGear Stora NAS drives Without Copying Data]]>This article shows how I managed to upgrade my Netgear Stora from 1TB RAID1 to 2TB without having to move the data off the device before changing drives.

]]>
https://dan.makovec.netupgrading-netgear-stora-nas-drives-without-copying-datahttps://dan.makovec.netupgrading-netgear-stora-nas-drives-without-copying-dataWed, 23 Jan 2013 02:55:00 GMT<p>A few years ago, I purchased a NetGear Stora NAS for my home.  It's a great device, proving useful for general file storage as well as Time Machine backups and iTunes sharing.  Thanks to the guys at <a href="http://www.openstora.com">OpenStora</a>, it has proven to be an easy machine to customize with additional functionality.  DLNA for media playback, a Transmission client for getting CentOS updates via BitTorrent, and even a <a href="http://www.openstora.com/phpBB3/viewtopic.php?f=1&#x26;t=904">CrashPlan node</a> for backing up over the net.</p> <p>I originally bought the machine 3 years ago with a single 1TB drive.  Once they became cheap enough I threw a second 1TB drive in for RAID1 security.  But over the past few months it started filling up, so I decided to invest in a couple of 2TB drives to get some more life out of it.</p> <p>The question I had was: how can I upgrade these drives without having to spend forever copying all the files off of it and back on over a network (given I didn't have any 1TB USB drives)?</p> <p>I eventually found a <a href="http://forum1.netgear.com/showthread.php?t=67599">thread here</a> which got me started.  Never having used most of the tools discussed before, I did a little reading around and did the following.  The whole process took about a day.  I found it so handy, and know a few friends with Storas who might benefit from knowing the process I followed.  If it helps you, let me know!  The usual disclaimers apply of course :)</p> <p>Here's the procedure:</p> <p>Shut down Stora</p> <ol> <li>Swap out right-hand side old drive for a new one.  Label the drive in case you need to put it back in.</li> <li>Boot up</li> <li>Log into web console and go to Preferences -> Disk Management</li> </ol> <p>The second drive is marked as unconfigured. Click relevant button to add drive to RAID1 array</p> <ol start="4"> <li>Wait for rebuild to complete. Status can be seen either on web console or via SSH, <code>cat /proc/mdstat</code> (it will take at least 2 hours)</li> <li>Once the rebuild is complete, repeat the process for the left-hand drive, i.e:</li> <li>Shut down Stora</li> <li>Swap out left old drive for a new one and boot up</li> <li>Log into web console and go to Preferences - > Disk Management</li> </ol> <p>This time the first drive will marked as unconfigured. Click relevant button to add the drive to RAID1 array</p> <ol start="9"> <li>Wait for rebuild to complete. Status can be seen either on web console or via SSH, <code>cat /proc/mdstat</code> (it will take at least 2 hours)</li> </ol> <p>At this point, both 2TB drives will have taken over from the old 1TB drives in serving RAID, but they will show only 1TB capacity.</p> <ol start="11"> <li>ssh into Stora and get <code>root</code>:</li> </ol> <pre><code>-bash-3.2$ sudo bash We trust you have received the usual lecture from the local System Administrator. It usually boils down to these three things: #1) Respect the privacy of others. #2) Think before you type. #3) With great power comes great responsibility. Password: audit_log_user_command(): Connection refused bash-3.2# </code></pre> <ol start="12"> <li>Take a look at the disk space on the RAID. 
Check the row for <code>/home</code>:</li> </ol> <pre><code>bash-3.2# df -m Filesystem 1M-blocks Used Available Use% Mounted on rootfs 212 158 54 75% / ubi0:rootfs 212 158 54 75% / none 62 1 62 1% /dev nodev 62 1 62 1% /var/log nodev 62 1 62 1% /mnt/tmpfs nodev 62 0 62 0% /var/lib/php/session nodev 953799 631836 321963 67% /tmp nodev 62 1 62 1% /var/run nodev 62 1 62 1% /var/cache nodev 62 1 62 1% /var/lib/axentra_sync nodev 62 1 62 1% /var/lib/oe-admin/minions nodev 62 1 62 1% /var/lib/oe-admin/actions nodev 62 1 62 1% /var/lib/oe-update-checker nodev 62 1 62 1% /etc/blkid nodev 62 1 62 1% /var/lib/dbus nodev 62 1 62 1% /var/lib/dhclient nodev 62 1 62 1% /var/lock nodev 62 1 62 1% /var/spool nodev 62 1 62 1% /etc/dhclient-eth0.conf nodev 62 1 62 1% /etc/printcap nodev 62 1 62 1% /etc/resolv.conf /dev/md0 953799 631836 321963 67% /home /dev/md0 953799 631836 321963 67% /tmp /dev/md0 953799 631836 321963 67% /var/cache/mt-daapd </code></pre> <p>In this case, there is a 1TB RAID array that's 67% full.</p> <ol start="13"> <li>Now get an idea as to the structure of Stora's RAID configuration. In the example below, it's running RAID1, with the two drives.</li> </ol> <p><code>sda1</code> is the single partition on drive <code>sda</code> - the drive on the left of the machine, and <code>sdb1</code> is likewise for the one on the right.</p> <pre><code>bash-3.2# /sbin/mdadm -D /dev/md0 # sda1 is LHS, sdb1 is RHS /dev/md0: Version : 00.90.03 Creation Time : Sun Aug 15 13:26:52 2010 Raid Level : raid1 Array Size : 976562432 (931.32 GiB 1000.00 GB) Used Dev Size : 976562432 (931.32 GiB 1000.00 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 0 Persistence : Superblock is persistent Update Time : Tue Jan 22 10:34:33 2013 State : clean Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 UUID : f1330325:7aaf8fc4:c524c7d5:22e065bd Events : 0.178622 Number Major Minor RaidDevice State 0 8 1 0 active sync /dev/sda1 1 8 17 1 active sync /dev/sdb1 </code></pre> <ol start="14"> <li>The next thing to do is tell the RAID manager that the right-hand side drive is faulty, so that it stops accessing it</li> </ol> <pre><code>bash-3.2# /sbin/mdadm --fail /dev/md0 /dev/sdb1 mdadm: set /dev/sdb1 faulty in /dev/md0 </code></pre> <p>The RHS drive now displays as faulty</p> <pre><code>bash-3.2# /sbin/mdadm -D /dev/md0 /dev/md0: Version : 00.90.03 Creation Time : Sun Aug 15 13:26:52 2010 Raid Level : raid1 Array Size : 976562432 (931.32 GiB 1000.00 GB) Used Dev Size : 976562432 (931.32 GiB 1000.00 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 0 Persistence : Superblock is persistent Update Time : Tue Jan 22 10:36:12 2013 State : clean, degraded Active Devices : 1 Working Devices : 1 Failed Devices : 1 Spare Devices : 0 UUID : f1330325:7aaf8fc4:c524c7d5:22e065bd Events : 0.178628 Number Major Minor RaidDevice State 0 8 1 0 active sync /dev/sda1 1 0 0 1 removed 2 8 17 - faulty spare /dev/sdb1 </code></pre> <ol start="15"> <li>Now tell the system that the drive has been removed from the array (don't actually physically remove the drive - this is all done via software):</li> </ol> <pre><code>bash-3.2# /sbin/mdadm --remove /dev/md0 /dev/sdb1 mdadm: hot removed /dev/sdb1 </code></pre> <p>The disk is now marked as removed:</p> <pre><code>bash-3.2# /sbin/mdadm -D /dev/md0 # The RHS drive no longer shows as part of the array /dev/md0: Version : 00.90.03 Creation Time : Sun Aug 15 13:26:52 2010 Raid Level : raid1 Array Size : 976562432 (931.32 GiB 1000.00 GB) Used Dev Size : 976562432 
(931.32 GiB 1000.00 GB) Raid Devices : 2 Total Devices : 1 Preferred Minor : 0 Persistence : Superblock is persistent Update Time : Tue Jan 22 10:36:30 2013 State : clean, degraded Active Devices : 1 Working Devices : 1 Failed Devices : 0 Spare Devices : 0 UUID : f1330325:7aaf8fc4:c524c7d5:22e065bd Events : 0.178636 Number Major Minor RaidDevice State 0 8 1 0 active sync /dev/sda1 1 0 0 1 removed </code></pre> <ol start="16"> <li>Now it's time to run <code>fdisk</code> on <code>sdb</code>). This will remove the old 1TB partition and create a new 2TB partition. This doesn't delete any data, so provided the new partition has the same start point, the data will still be accessible when the partition table is written to reflecting the new partition.</li> </ol> <p>Run the following commands:</p> <ul> <li>p: Show the existing partition structure (here it shows a single 1TB partition of type "fd" (Linux raid) starting at cylinder 1</li> <li>d: Delete partition (as there's only 1 it doesn't prompt you for a number</li> <li>p: Show that the partition table is empty</li> <li>n: Create new partition</li> <li>Primary partition</li> <li>Partition #1</li> <li>Default cylinder first and last values to use entire disk. Use hex code "fd" to set partition as Linux RAID type</li> <li>p: Show that the partition is created as desired</li> <li>w: Write the new partition table to disk</li> </ul> <pre><code>bash-3.2# /sbin/fdisk /dev/sdb The number of cylinders for this disk is set to 243201. There is nothing wrong with that, but this is larger than 1024, and could in certain setups cause problems with: 1) software that runs at boot time (e.g., old versions of LILO) 2) booting and partitioning software from other OSs (e.g., DOS FDISK, OS/2 FDISK) Command (m for help): p Disk /dev/sdb: 2000.3 GB, 2000398934016 bytes 255 heads, 63 sectors/track, 243201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sdb1 1 121577 976562500 fd Linux raid autodetect Command (m for help): d Selected partition 1 Command (m for help): p Disk /dev/sdb: 2000.3 GB, 2000398934016 bytes 255 heads, 63 sectors/track, 243201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System Command (m for help): n Command action e extended p primary partition (1-4) p Partition number (1-4): 1 First cylinder (1-243201, default 1): Using default value 1 Last cylinder or +size or +sizeM or +sizeK (1-243201, default 243201): Using default value 243201 Command (m for help): p Disk /dev/sdb: 2000.3 GB, 2000398934016 bytes 255 heads, 63 sectors/track, 243201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sdb1 1 243201 1953512001 83 Linux Command (m for help): t Selected partition 1 Hex code (type L to list codes): fd Changed system type of partition 1 to fd (Linux raid autodetect) Command (m for help): p Disk /dev/sdb: 2000.3 GB, 2000398934016 bytes 255 heads, 63 sectors/track, 243201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sdb1 1 243201 1953512001 fd Linux raid autodetect Command (m for help): w The partition table has been altered! Calling ioctl() to re-read partition table. WARNING: If you have created or modified any DOS 6.x partitions, please see the fdisk manual page for additional information. Syncing disks. 
</code></pre> <ol start="17"> <li>Now, run fdisk again and use "p" to verify that the partition table was indeed written out correctly.</li> </ol> <pre><code>bash-3.2# /sbin/fdisk /dev/sdb The number of cylinders for this disk is set to 243201. There is nothing wrong with that, but this is larger than 1024, and could in certain setups cause problems with: 1) software that runs at boot time (e.g., old versions of LILO) 2) booting and partitioning software from other OSs (e.g., DOS FDISK, OS/2 FDISK) Command (m for help): p Disk /dev/sdb: 2000.3 GB, 2000398934016 bytes 255 heads, 63 sectors/track, 243201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sdb1 1 243201 1953512001 fd Linux raid autodetect Command (m for help): q </code></pre> <ol start="18"> <li>Add the drive back into the array</li> </ol> <pre><code>bash-3.2# /sbin/mdadm --add /dev/md0 /dev/sdb1 mdadm: added /dev/sdb1 </code></pre> <ol start="19"> <li>Verify that the drive has started rebuilding again</li> </ol> <pre><code>bash-3.2# /sbin/mdadm -D /dev/md0 /dev/md0: Version : 00.90.03 Creation Time : Sun Aug 15 13:26:52 2010 Raid Level : raid1 Array Size : 976562432 (931.32 GiB 1000.00 GB) Used Dev Size : 976562432 (931.32 GiB 1000.00 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 0 Persistence : Superblock is persistent Update Time : Tue Jan 22 10:48:02 2013 State : clean, degraded, recovering Active Devices : 1 Working Devices : 2 Failed Devices : 0 Spare Devices : 1 Rebuild Status : 0% complete UUID : f1330325:7aaf8fc4:c524c7d5:22e065bd Events : 0.178734 Number Major Minor RaidDevice State 0 8 1 0 active sync /dev/sda1 2 8 17 1 spare rebuilding /dev/sdb1 bash-3.2# cat /proc/mdstat # Get your estimate as to when the rebuild will be finished Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4] md0 : active raid1 sdb1[2] sda1[0] 976562432 blocks [2/1] [U_] [&#x26;gt;....................] recovery = 2.2% (21843520/976562432) finish=128.4min speed=123842K/sec </code></pre> <p>This will take a while (over 2 hours in the example shown above). You can run the above command or simply <code>cat /proc/mdstat</code> to check on progress. When it looks finished, verify that both disks are active again:</p> <pre><code>bash-3.2# /sbin/mdadm -D /dev/md0 /dev/md0: Version : 00.90.03 Creation Time : Sun Aug 15 13:26:52 2010 Raid Level : raid1 Array Size : 976562432 (931.32 GiB 1000.00 GB) Used Dev Size : 976562432 (931.32 GiB 1000.00 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 0 Persistence : Superblock is persistent Update Time : Tue Jan 22 13:33:49 2013 State : clean Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 UUID : f1330325:7aaf8fc4:c524c7d5:22e065bd Events : 0.181086 Number Major Minor RaidDevice State 0 8 1 0 active sync /dev/sda1 1 8 17 1 active sync /dev/sdb1 </code></pre> <ol start="20"> <li>Now it's time to set up the left hand drive (<code>sda</code>). 
We just repeat the same procedure as above with <code>sdb</code>:</li> </ol> <pre><code>bash-3.2# /sbin/mdadm --fail /dev/md0 /dev/sda1 mdadm: set /dev/sda1 faulty in /dev/md0 bash-3.2# /sbin/mdadm -D /dev/md0 /dev/md0: Version : 00.90.03 Creation Time : Sun Aug 15 13:26:52 2010 Raid Level : raid1 Array Size : 976562432 (931.32 GiB 1000.00 GB) Used Dev Size : 976562432 (931.32 GiB 1000.00 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 0 Persistence : Superblock is persistent Update Time : Tue Jan 22 13:35:09 2013 State : clean, degraded Active Devices : 1 Working Devices : 1 Failed Devices : 1 Spare Devices : 0 UUID : f1330325:7aaf8fc4:c524c7d5:22e065bd Events : 0.181092 Number Major Minor RaidDevice State 0 0 0 0 removed 1 8 17 1 active sync /dev/sdb1 2 8 1 - faulty spare /dev/sda1 bash-3.2# /sbin/mdadm --remove /dev/md0 /dev/sda1 mdadm: hot removed /dev/sda1 bash-3.2# /sbin/mdadm -D /dev/md0 /dev/md0: Version : 00.90.03 Creation Time : Sun Aug 15 13:26:52 2010 Raid Level : raid1 Array Size : 976562432 (931.32 GiB 1000.00 GB) Used Dev Size : 976562432 (931.32 GiB 1000.00 GB) Raid Devices : 2 Total Devices : 1 Preferred Minor : 0 Persistence : Superblock is persistent Update Time : Tue Jan 22 13:35:33 2013 State : clean, degraded Active Devices : 1 Working Devices : 1 Failed Devices : 0 Spare Devices : 0 UUID : f1330325:7aaf8fc4:c524c7d5:22e065bd Events : 0.181102 Number Major Minor RaidDevice State 0 0 0 0 removed 1 8 17 1 active sync /dev/sdb1 bash-3.2# /sbin/fdisk /dev/sda The number of cylinders for this disk is set to 243201. There is nothing wrong with that, but this is larger than 1024, and could in certain setups cause problems with: 1) software that runs at boot time (e.g., old versions of LILO) 2) booting and partitioning software from other OSs (e.g., DOS FDISK, OS/2 FDISK) Command (m for help): p Disk /dev/sda: 2000.3 GB, 2000398934016 bytes 255 heads, 63 sectors/track, 243201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sda1 1 243153 1953125000 fd Linux raid autodetect Command (m for help): d Selected partition 1 Command (m for help): p Disk /dev/sda: 2000.3 GB, 2000398934016 bytes 255 heads, 63 sectors/track, 243201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System Command (m for help): n Command action e extended p primary partition (1-4) p Partition number (1-4): 1 First cylinder (1-243201, default 1): Using default value 1 Last cylinder or +size or +sizeM or +sizeK (1-243201, default 243201): Using default value 243201 Command (m for help): t Selected partition 1 Hex code (type L to list codes): fd Changed system type of partition 1 to fd (Linux raid autodetect) Command (m for help): p Disk /dev/sda: 2000.3 GB, 2000398934016 bytes 255 heads, 63 sectors/track, 243201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sda1 1 243201 1953512001 fd Linux raid autodetect Command (m for help): w The partition table has been altered! Calling ioctl() to re-read partition table. Syncing disks. bash-3.2# /sbin/fdisk /dev/sda The number of cylinders for this disk is set to 243201. 
There is nothing wrong with that, but this is larger than 1024, and could in certain setups cause problems with: 1) software that runs at boot time (e.g., old versions of LILO) 2) booting and partitioning software from other OSs (e.g., DOS FDISK, OS/2 FDISK) Command (m for help): p Disk /dev/sda: 2000.3 GB, 2000398934016 bytes 255 heads, 63 sectors/track, 243201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sda1 1 243201 1953512001 fd Linux raid autodetect Command (m for help): q bash-3.2# /sbin/mdadm --add /dev/md0 /dev/sda1 mdadm: added /dev/sda1 bash-3.2# /sbin/mdadm -D /dev/md0 /dev/md0: Version : 00.90.03 Creation Time : Sun Aug 15 13:26:52 2010 Raid Level : raid1 Array Size : 976562432 (931.32 GiB 1000.00 GB) Used Dev Size : 976562432 (931.32 GiB 1000.00 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 0 Persistence : Superblock is persistent Update Time : Tue Jan 22 13:42:22 2013 State : clean, degraded, recovering Active Devices : 1 Working Devices : 2 Failed Devices : 0 Spare Devices : 1 Rebuild Status : 0% complete UUID : f1330325:7aaf8fc4:c524c7d5:22e065bd Events : 0.181250 Number Major Minor RaidDevice State 2 8 1 0 spare rebuilding /dev/sda1 1 8 17 1 active sync /dev/sdb1 bash-3.2# cat /proc/mdstat Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4] md0 : active raid1 sda1[2] sdb1[1] 976562432 blocks [2/1] [_U] [&#x26;gt;....................] recovery = 0.6% (6342592/976562432) finish=115.3min speed=140204K/sec unused devices: &#x26;lt;none&#x26;gt; </code></pre> <ol start="21"> <li>So now it's a few hours later and the RAID is up and running again with the two disks now re-partitioned. The RAID itself however still thinks it's operating with the original disk size. So tell it to grow to fill the entire available space. Again, this will take a few hours.</li> </ol> <pre><code>bash-3.2# /sbin/mdadm --grow /dev/md0 -z max </code></pre> <ol start="22"> <li>Growth progress can be checked with <code>cat /proc/mdstat</code>, or alternatively, use the <code>--wait</code> option to pause the command prompt from executing the next command until RAID growth is complete:</li> </ol> <pre><code>bash-3.2# /sbin/mdadm --wait /dev/md0 </code></pre> <ol start="23"> <li>Once it is complete, the filesystem on the virtual disk needs to be grown to fill it. This is a relatively quick operation.</li> </ol> <pre><code>bash-3.2# /usr/sbin/xfs_growfs -D max /dev/md0 meta-data=/dev/md0 isize=256 agcount=32, agsize=7629394 blks = sectsz=512 attr=0 data = bsize=4096 blocks=244140608, imaxpct=25 = sunit=0 swidth=0 blks, unwritten=1 naming =version 2 bsize=4096 log =internal bsize=4096 blocks=32768, version=1 = sectsz=512 sunit=0 blks realtime =none extsz=65536 blocks=0, rtextents=0 data blocks changed from 244140608 to 488377984 </code></pre> <ol start="24"> <li>Now take a look at mounted disk space. 
<code>/home</code> should now show plenty of free space!</li> </ol> <pre><code>bash-3.2# df -m
Filesystem      1M-blocks    Used Available Use% Mounted on
rootfs                212     158        54  75% /
ubi0:rootfs           212     158        54  75% /
none                   62       1        62   1% /dev
nodev                  62       1        62   1% /var/log
nodev                  62       1        62   1% /mnt/tmpfs
nodev                  62       0        62   0% /var/lib/php/session
nodev             1907599  631836   1275764  34% /tmp
nodev                  62       1        62   1% /var/run
nodev                  62       1        62   1% /var/cache
nodev                  62       1        62   1% /var/lib/axentra_sync
nodev                  62       1        62   1% /var/lib/oe-admin/minions
nodev                  62       1        62   1% /var/lib/oe-admin/actions
nodev                  62       1        62   1% /var/lib/oe-update-checker
nodev                  62       1        62   1% /etc/blkid
nodev                  62       1        62   1% /var/lib/dbus
nodev                  62       1        62   1% /var/lib/dhclient
nodev                  62       1        62   1% /var/lock
nodev                  62       1        62   1% /var/spool
nodev                  62       1        62   1% /etc/dhclient-eth0.conf
nodev                  62       1        62   1% /etc/printcap
nodev                  62       1        62   1% /etc/resolv.conf
/dev/md0          1907599  631836   1275764  34% /home
/dev/md0          1907599  631836   1275764  34% /tmp
/dev/md0          1907599  631836   1275764  34% /var/cache/mt-daapd
</code></pre>
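<p>For anyone who just wants the shape of the software side of the process, here's a condensed recap of the commands used above, once the physical swaps and initial rebuilds are done (device and partition names are the ones from my Stora; yours may differ):</p> <pre><code># repeat this block once per disk, waiting for the rebuild to finish in between
/sbin/mdadm --fail /dev/md0 /dev/sdb1     # mark the 1TB-sized partition as failed
/sbin/mdadm --remove /dev/md0 /dev/sdb1   # drop it from the array
/sbin/fdisk /dev/sdb                      # delete partition 1, recreate it full-size, type fd, write
/sbin/mdadm --add /dev/md0 /dev/sdb1      # re-add it and let the RAID rebuild
cat /proc/mdstat                          # watch the rebuild progress

# once both disks are re-partitioned and back in sync:
/sbin/mdadm --grow /dev/md0 -z max        # grow the array to the new partition size
/sbin/mdadm --wait /dev/md0               # block until the grow/resync completes
/usr/sbin/xfs_growfs -D max /dev/md0      # grow the XFS filesystem to fill the array
</code></pre>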
<![CDATA[DVD Region Unlocking a Samsung HT-E5550W Home Theatre]]>With thanks to "choochoo" on Whirlpool, I found the following notes on how to DVD region unlock a Samsung HT-E5550W home theatre.

]]>
https://dan.makovec.netdvd-region-unlocking-a-samsung-ht-e5550w-home-theatrehttps://dan.makovec.netdvd-region-unlocking-a-samsung-ht-e5550w-home-theatreSun, 20 Jan 2013 07:09:00 GMT<ol> <li>Ensure that there is no disk in the tray.</li> <li>Switch the player off and then back on. This is necessary to clean-boot the firmware.</li> <li>Wait until the main menu page appears and the player stops doing anything.</li> <li>Press the eject button to open the tray. (The button is between the "Power" and "TV Power" buttons at the top of the controller.)</li> <li>Press the eject button to close the tray.</li> <li>Wait while the player searches for a disk.</li> <li>As soon as the "No Disk" message appears at the top right corner, do the following.</li> <li>Press the REPEAT button – just above the TV channel selector at the bottom right of the controller.</li> <li>Enter 76884 on the number pad. This is the code if you have a region 4 player (other region codes are given below).</li> <li>The region code "4" should appear at the top right corner of the screen. (This just flashes on and then off, so keep an eye on the screen.)</li> <li>If you see "4" in the top right corner, quickly enter "9" to make the player region free.</li> <li>Power down and power up again.</li> <li>Play a foreign DVD to verify the change.</li> </ol> <p>Other region codes to be used in step 9 if you don't have a region 4 player:</p> <p>1 – 2 9 3 3 4</p> <p>2 – 5 7 5 3 8</p> <p>3 – 5 6 7 3 2</p> <p>4 – 7 6 8 8 4</p> <p>5 – 5 3 8 1 4</p> <p>6 – 2 4 4 6 2</p>