One Year

Everyone who knows better than I do is telling me just how volatile and confusing this time can be. I spoke with someone today I’d been wanting to speak with, but was shy about. Partly I assumed that, since he seemed likable, he was probably popular and couldn’t handle talking to yet another person.

But instead he spoke with me in a more connected and attentive way than anyone has in a long time. Maybe that was just for today, but his presence gave me something to aspire to. I had sensed that when I first heard him speak a year ago, which is why I wanted to talk with him, and also why I was terrified. People of presence are frightening to the ego?

He spoke to me of this period being like trying to cross an ocean from New York to England: you’ve made it halfway, and you’re tired, and it’s challenging, and maybe you just want to turn back; maybe you’re finding a lot of unexpected troubles coming up, maybe finding it difficult even to stay afloat. But he urged me: keep going. Pick up the oars that brought you this far and keep going. Which reminds me of a mantra Josh Korda uses that I like: “I love you, keep going.”

He told me something else that I needed to hear. He said that we get into relationship with others in order to heal. I take that to mean both “to heal others” and “to be healed.”

This jibes with my past life’s casual reading into kinship, where my favorite framework is John Borneman’s: that kinship is about human needs “to care and be cared for.” This is the benefit of the secure connections Josh talks about a lot. It touches on the psychological/spiritual experience of individuals around kinship, where we experience limerence and attachment and sympathy and bonding in friendships as much as in romantic entanglements, all of which serve to put us in relation, in connection, in order to care for each other. Past traumas, and caregivers who couldn’t be there for us, leave us with poor defenses and strategies that hobble our ability to connect with others, and basically make the mutual caring of kinship synonymous with mutual healing.

It sets up a catch-22 where we may have difficulty connecting with others, yet that connection is the thing that can heal us best towards being able to connect better.

In the past I have mainly connected in the abstract to these ideas, but bringing it home to my personal life and the feelings of healing, support of growth, and care that I have experienced in good relationships, allows me to more personally and mindfully state that it’s true, I seek to be connected in order to heal and be healed. And that this should be my relationship to myself, as well. This is a powerful thing to be aware of.

My new friend today reminded me so, so kindly of just how much things are not about me — of just how much other people have their own stories and their own healing trajectories. This is a compassion I seek to embody more automatically in my life as time goes on.

It is helpful to be readjusted to the idea that I am not here to cling to things that bring me pleasure, but to serve and heal others and to receive through the giving. And to rest in the flow of what is, not thrash about creating turbulence. I know if I start lingering over questions of “what might I lose?” I am doing something wrong; the counter-intuitive antidote, for my own well-being, should be, “what might I give?”

Thus begins part of my homework for year two.

Review: The Bodhisattva’s Brain: Buddhism Naturalized

The Bodhisattva’s Brain: Buddhism Naturalized by Owen Flanagan

My rating: 4 of 5 stars

I found Flanagan’s writing clear and enjoyable, and his exploration of Buddhism from a naturalist philosopher’s perspective rather elucidating. I found it particularly enlightening to notice that I did not like it when he calls into question whether Buddhist epistemology necessarily leads to Buddhist-style ethics, because I want to believe it does; but that’s not very Buddhist of me, to be so attached to the belief that there can and should be an “ought” that everyone can find their way to. It’s also not very “me,” in that I actually believe people can be such different phenomena in the world that not everyone *can* “find their way” to the same solutions. For example, the Buddhist “solution” to living is going to work better for someone of average mental health than for someone who struggles with the sort of chemically or physically based depression or other psychological situations that some people struggle with.

I still feel that there’s something missing in Flanagan’s work, something I can’t put my finger on, about a “correct” view of anatman/no-self and its link with compassion. He does allow for the possibility that compassion arises simply because we happen to be the sort of beings for whom compassion and social engagement make us flourish better, and I think that’s true (anthropology and psychology seem to corroborate it), but he’s correct to point out that this isn’t part of the Buddhist argument; it could be added to the Buddhist worldview, but it isn’t classically part of it. I think part of the reason a “correct perception” of impermanence and no-self should generally lead to compassion is that human beings who seek enlightenment are usually, to some degree, interested in alleviating some suffering or dealing with some pain, and a certain understanding of impermanence and no-self can (should?) engender self-compassion, self-forgiveness, self-kindness, and gratitude, which can then be easily directed outward to others. One should not, in theory, jump to sadness and nihilism, because the idea of impermanence and an understanding of the origins and causes of “dukkha” are intended to be a *relief.* That relief allows us to be kinder to ourselves, and then to others.

In any case, I highly recommend this book to anyone with an interest in Buddhism and philosophy, especially those who lean “agnostic or atheist or naturalist” or what-have-you when it comes to spirituality or religion.




Things learned about ZFS and why I’m still using it

I was about to give up on ZFS because of some drive issues, but I think those were just because the drives were behind a cut-rate SATA port multiplier; I never had issues with the ones connected to my other computer through a quality (well, still budget, but good) SAS card. It was also tempting to try aufs again, with SnapRAID, so I could make use of some smaller drives, but ultimately the extra space was a marginal benefit compared with the features gained from ZFS.

So I’m still on ZFS, though I’m just doing raidz2 this time. I’d been reading that the optimal vdev size is 2^N+1 drives for raidz1, 2^N+2 for raidz2, and 2^N+3 for raidz3 (a power-of-two number of data drives, plus the parity drives), so I figured I’d try to do the optimal setup with what I had.
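That rule of thumb works out to small tables of drive counts; here’s a quick sketch of the arithmetic (the rule itself is just the forum advice I’d read, not something I can vouch for):

```shell
# Print "optimal" total drive counts per the 2^N-data-drives-plus-parity
# rule of thumb, for N = 1..4.
for n in 1 2 3 4; do
    data=$((1 << n))    # power-of-two number of data drives
    echo "N=$n: raidz1 -> $((data + 1)) drives, raidz2 -> $((data + 2)) drives, raidz3 -> $((data + 3)) drives"
done
```

So, for example, a raidz2 vdev would "want" 6 or 10 total drives, and a raidz3 would want 11.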

ZFS is so compelling because its snapshot and backup features are brilliant. Having installed zfs-auto-snapshot, I get hourly, daily, weekly, monthly, and every-15-minutes snapshots automatically, meaning if I accidentally delete anything I can quickly copy it out of a hidden snapshot, and these are differential so they take up hardly any space.
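Recovering a deleted file looks roughly like this; the pool name ("storage") and snapshot name are from my setup and are illustrative assumptions:

```shell
# The .zfs directory is hidden (it won't appear in ls unless the
# pool's snapdir property is set to visible), but you can cd into it.
ls /storage/.zfs/snapshot/
# Copy the file back out of a snapshot taken before the deletion:
cp /storage/.zfs/snapshot/zfs-auto-snap_hourly-2013-09-04-1200/some/file /storage/some/file
```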

Furthermore, zfs send/receive, built into the tools, allows me to send the whole filesystem, snapshots included, to my backup server, and incrementally at that, for better-than-rsync backups. You can do this manually with one snapshot to get a straight copy backup, and pipe through pv and netcat for a progress meter (modified from here):

On sending machine:

zfs send storage@zfs-auto-snap_daily-2013-09-04-0759 | pv -s 4597499264K | nc -l 3333

On receiving machine:

nc 192.168.47.2 3333 | pv -s 4597499264K | zfs receive storage

The number after “pv -s” is the size of the data to be transferred, in kilobytes, which you can obtain by running “df -Bk” on the sending machine and copying the “Used” figure for your pool. If you have a number of snapshots, as I do, this will actually be higher than the data you’ll really transfer, but not grossly so; that just means you’ll be done sooner than the indicator says, so that’s exciting, no?
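As a sketch of extracting that number, assuming a pool mounted at /storage (the df output below is made up for illustration; on the real sending machine you’d pipe `df -Bk` itself into awk):

```shell
# Pull the "Used" kilobyte count for the pool's mount point out of
# df -Bk output, to feed to pv -s.
sample='Filesystem     1K-blocks       Used  Available Use% Mounted on
storage       8790141952 4597499264 4192642688  53% /storage'
# Match the row whose last field is the mount point; field 3 is "Used".
used=$(printf '%s\n' "$sample" | awk '$NF == "/storage" {print $3}')
echo "pv -s ${used}K"
```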

That’s just for a quick-and-dirty filesystem copy from one machine to another. For daily use I found I was able to modify and use this backup script for nightly backups of the day’s changes, including snapshots.

So, easy backups, plus ZFS is built to checksum your data, so with weekly checks and daily scripts to monitor your hard disks and data sets, you’re pretty safe from data loss.
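For the weekly checks, something like these crontab entries works; the pool name and schedule are my assumptions, not a canonical setup:

```shell
# Weekly scrub, Sunday at 3am:
0 3 * * 0  /sbin/zpool scrub storage
# Daily health check: zpool status -x prints details only when a pool
# has problems, so after filtering the "healthy" line, cron mails you
# output only when something is actually wrong:
0 8 * * *  /sbin/zpool status -x | grep -v "all pools are healthy"
```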

Daemonize CopyConsole as a Service in Ubuntu Linux

This is a script to run “CopyConsole” as a service, using an init.d script, in Linux. I have tested it on Ubuntu 13.04.

Background:

“CopyConsole” is the non-gui agent for the Copy cloud storage service by Barracuda Networks; Copy is similar to Dropbox. In fact, I adapted this from the init.d script I’m using for running a headless dropbox daemon on the same server, so it’s very simple. I am only interested in running these services under one account, so the scripts are not flexible for multiple users, but they could easily be modified to be so; just have a look at the default Dropbox init.d script they provide.

NOTE:

This assumes you already have Copy set up for your user. This does not cover the setup & authorization of Copy.

But despite my not passing it any options, CopyConsole correctly picks up the right auth and even the right folder location for my Copy directory, which I set up in a non-standard place.

The Script:

#!/bin/sh
# CopyConsole (Copy cloud storage by Barracuda) service

start() {
    echo "Starting CopyConsole..."
    if [ -x /home/adam/.copy/CopyConsole ]; then
        HOME="/home/adam" start-stop-daemon -b -o -c adam -S -u adam -x /home/adam/.copy/CopyConsole -- -daemon
    fi
}

stop() {
    echo "Stopping CopyConsole..."
    if [ -x /home/adam/.copy/CopyConsole ]; then
        start-stop-daemon -o -c adam -K -u adam -x /home/adam/.copy/CopyConsole
    fi
}

status() {
    dbpid=$(pgrep -u adam CopyConsole)
    if [ -z "$dbpid" ]; then
        echo "CopyConsole for USER adam: not running."
    else
        echo "CopyConsole for USER adam: running (pid $dbpid)"
    fi
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart|reload|force-reload)
        stop
        start
        ;;
    status)
        status
        ;;
    *)
        echo "Usage: /etc/init.d/copyconsole {start|stop|reload|force-reload|restart|status}"
        exit 1
        ;;
esac

exit 0

XBMC and Netflix on Windows 8 with Xpadder (and Kylo alternative)

I made this work with a limited solution I’m quite happy with. I wish Netflix would 1) support Linux beyond Android, and 2) allow API access for integration with services like XBMC, but in the world I actually live in, I settled on Windows and the Windows 8 “Metro” app version of Netflix. I was going to settle on Windows anyway because I also want to play games launched by XBMC or Steam Big Picture.

Because I also wanted games, this made the solution pretty simple. I have an XBox 360 Wireless controller for gaming, which XBMC 12 natively supports as a remote control. All I needed to do was configure the computer so the controller input could also interact with the Netflix metro app (this will work for any metro app, I just have to create a launcher configuration for it).

I initially tried to use Pinnacle Game Profiler, but I was having issues with the “default” windows profile, and then trying to activate it by command line from AutoHotKey kept causing Pinnacle to crash. So I decided to try out Xpadder… I like Xpadder because it’s smaller, simpler, and has better (for me) command line support.

So here’s how this works:

  1. Create an Xpadder config that I’m happy with for operating the Netflix Win 8 app. I chose to make the left stick operate the mouse, the right stick operate the volume (as it does in XBMC), the A/B buttons the Left/Right mouse buttons, and then the D-pad does the mouse wheel up/down for left-right app scrolling (hmm, maybe I should switch that to left-right D-pad?), left D-pad for navigating “back,” and a couple other buttons for mute and something else I forget.
    ** The key here is setting the “Back” button to send the “Esc” key. This I have configured in AutoHotKey for exiting Netflix and returning to XBMC.
  2. I want to start this all from Advanced Launcher in XBMC, so I want to use AutoHotKey to set up a script. That means I need a command-line way to launch Netflix with the XPadder config. XPadder can do this, but it requires a Windows shortcut (.LNK file) to run whatever program you want it to launch. I was having difficulty getting it to use a simple URL-based shortcut; one can create one for the Netflix App by having it launch “netflix://” but that opens an IE window first. I probably could have created a batch file or something, but hey, I’m using AutoHotkey anyway! So:
    1. Create simple Autohotkey script to “just” launch the Netflix Win 8 app:

      stringtrimright, exevar, A_ScriptName,4
      run, netflix://
    2. Compile that script
    3. Create Windows Shortcut to run the compiled Autohotkey script
    4. Note Xpadder command line to launch Xpadder config & launch Netflix:
      D:\XPadder\Xpadder.exe "Netflix8" "D:\XPadder\NetflixWin8.lnk" /M
    5. Test it… It works!
  3. Now that I’m ready to launch the Netflix 8 app the way I want it, I need to set up XBMC. For that I need another Autohotkey script to 1) launch Xpadder with the noted command line, 2) watch for “Esc” key, 3) close Xpadder/Netflix and switch to XBMC when Esc is pressed. XPadder also provides a command-line way to close itself and, I think, its launched program, but just in case it doesn’t close netflix, I’m also telling the Autohotkey to do so. Here’s my script.
    stringtrimright, exevar, A_ScriptName,4
    run, D:\XPadder\Xpadder.exe "Netflix8" "D:\XPadder\NetflixWin8.lnk" /M
    Esc::
    ifwinexist, ahk_class XBMC
    {
    WinClose, Netflix
    run, D:\XPadder\Xpadder.exe "Netflix8" "D:\XPadder\NetflixWin8.lnk" /C
    WinActivate, ahk_class XBMC
    exitapp
    }
    else
    {
    WinClose, Netflix
    run, D:\XPadder\Xpadder.exe "Netflix8" "D:\XPadder\NetflixWin8.lnk" /C
    run "C:\Program Files (x86)\XBMC\xbmc.exe"
    sleep, 1000
    WinActivate, ahk_class XBMC
    exitapp
    }
  4. With that all set and tested a few times, XBMC will launch XPadder/Netflix, I can browse and play things, and hit that Back button to quit and return to XBMC.

    Without Windows 8 (I’m actually running Windows Server 2012 for this, but that’s for Hyper-V and multi-session Remote Desktop support, and a subject for a different post), one wouldn’t be able to run the Netflix app, but one could 1) use the Windows Media Center Netflix plugin, whose interface isn’t as good and which I don’t think supports Netflix “Super HD,” or 2) launch a web browser to Netflix. Option #2 could also be combined with XPadder for the same sort of control.

    In fact, on Windows, a good browser for this is Kylo, a Mozilla/Firefox derivative designed for TV use (so-called “10-foot interfaces”). I set up the exact same sequence of launch events via AutoHotKey & XPadder, even using the same XPadder config, to launch Kylo for occasional browsing, and XBMC launches it like a champ.

    On Ubuntu, I guess I’d do the same thing but with the wine-compholio package for Ubuntu to run Wine/Firefox/Netflix, and some sort of temporary keymapping for the controller. But since I wanted maximum games, Remote Desktop, Hyper-V, and a few other Windows features, I stuck with Windows.

    Ideally, I wish I had a Windows- or Linux-compatible app-like interface for Netflix that looked and worked the same as the Netflix PS3 or Google TV interfaces, which are about the same as each other. I like that interface and it’s much better for couch-based browsing, but alas, there’s no such thing for Windows, and given all the other capabilities this setup affords me, I’m not going to also buy a streaming box or a separate gaming console. Although maybe I’ll live with some people one day for whom such solutions will be preferred….

Aluminum Can Houses

[Photo: empty steel beer and soft-drink cans being used to build experimental housing near Taos, New Mexico (NARA 556618)]

When I was younger, I used to read National Geographic World magazine. Once they ran a story about, I think, houses made of clay and aluminum cans. I wanted to be the kind of person who would know how to do that, when to do it, why to do it. I wanted to live in an aluminum can house, or at least meet people who did and learn about them.

Then something happened and I learned to “grow up” and hate the world.

Lately I am starting to feel like a kid again who wants to live in aluminum can houses. Or, rather, I don’t really want to live in an aluminum can house, necessarily, I just want to live with the attitude of 1) people who might, and 2) the kid who wondered at such things and wanted to learn & do more about them.

ZFS, not MDADM, finally.

So, never mind the mdadm part of that last post! I was fine with mdadm taking forever to perform its initial sync, but on reboot the array was constantly degraded. I think it might have had something to do with Ubuntu 13.04’s boot order with my SAS PCIe card: that is, I hypothesize it wasn’t mounting all the drives before trying to reassemble the RAID device. But I’m not sure, and I didn’t try to resolve it, because I had a better idea:

ZFS on Linux. OK, it’s not as flexible as mdadm, or aufs or mhddfs, in terms of adding drives to expand storage, but I have plenty of storage right now. I don’t foresee any need to expand or add storage for two years, minimum. If in two years I really do need more storage, I may as well just buy a commercial NAS at that point.

For now, ZFS is working pretty great. It’s faster to set up than mdadm, though a little harder to find documentation online. So for that reason, I’m jotting down these notes.

I used ZFS with raidz3, as advised online (multiple howtos and FAQs (here’s one) said a 10-drive array was probably best with triple redundancy, and that was fine with me).

I used the “ashift=12” option because all my drives are 4K-sector drives masquerading as 512-byte (see here).

My create command:

zpool create -o ashift=12 storage raidz3 scsi-SATA_ST3000DM001-1CH_Z1F2HYRG scsi-SATA_ST3000DM001-1CH_Z1F2J0Z0 scsi-SATA_ST3000DM001-1CH_Z1F2TS6K scsi-SATA_ST3000DM001-1CH_Z1F2WXBY scsi-SATA_ST3000DM001-1CH_Z1F2X155 scsi-SATA_ST3000DM001-9YN_W1F010J3 scsi-SATA_ST3000DM001-9YN_W1F17M4L scsi-SATA_ST3000DM001-9YN_W1F18AMJ scsi-SATA_ST3000DM001-9YN_W1F19G1K scsi-SATA_ST3000DM001-9YN_Z1F16VP8

I was having problems with a traditional NFS export mounting “rw” properly, perhaps because of the “fsid” option, but that’s only hindsight. To set it up according to the zfsonlinux documentation I did:

1) Create a dummy export in /etc/exports of ‘/mnt localhost(ro)’.
2) Run: zfs set sharenfs="rw=@192.168.47.0/24,all_squash,insecure,anonuid=1000,anongid=1000,no_subtree_check,nohide,sync" storage
   (these are the options I had used in a standard /etc/exports).
3) Run: touch /etc/init.d/.legacy-bootordering
4) Edit /etc/default/zfs to set zfs_mount='yes' and zfs_share='yes'.
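Rolled into one script, those steps look roughly like this; the pool name, subnet, uid/gid, and the exact variable names in /etc/default/zfs are from my setup and may differ on yours:

```shell
#!/bin/sh
# Sketch of the four NFS-sharing setup steps, run as root.
# Dummy export so the NFS server starts even with no /etc/exports entries:
echo '/mnt localhost(ro)' >> /etc/exports
# Let ZFS manage the actual export:
zfs set sharenfs="rw=@192.168.47.0/24,all_squash,insecure,anonuid=1000,anongid=1000,no_subtree_check,nohide,sync" storage
# Use legacy boot ordering so the ZFS init scripts run at the right time:
touch /etc/init.d/.legacy-bootordering
# Enable mounting and sharing in the defaults file (variable names assumed):
sed -i -e "s/^zfs_mount=.*/zfs_mount='yes'/" \
       -e "s/^zfs_share=.*/zfs_share='yes'/" /etc/default/zfs
```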

Note that I have no datasets, I’m just using the base zpool. I guess I’m just a “drop it in a giant bin” kind of guy. This seems to be working so far. In any case, I have backup.