Review: The Bodhisattva’s Brain: Buddhism Naturalized

The Bodhisattva’s Brain: Buddhism Naturalized by Owen Flanagan

My rating: 4 of 5 stars

I found Flanagan’s writing clear and enjoyable, and his exploration of Buddhism from a naturalist philosopher’s perspective rather elucidating. I found it particularly enlightening to notice that I did not like it when he calls into question whether Buddhist epistemology necessarily leads to Buddhist-style ethics, because I want to believe it does, but that’s not very Buddhist of me, to be so attached to the belief that there can and should be an “ought” that everyone can find their way to. It’s also not very “me,” in that I actually believe people can be such different phenomena in the world that not everyone *can* “find their way” to the same solutions. For example, the Buddhist “solution” to living is going to work better for someone of average mental health than for someone who struggles with chemically or physically based depression or other psychological difficulties.

I still feel that there’s something missing in Flanagan’s work, something I can’t put my finger on, about a “correct” view of anatman/no-self and its link with compassion. He does allow for the possibility that compassion should arise simply because we happen to be the sort of beings who flourish better with compassion and social engagement — and I think that’s true (anthropology and psychology seem to corroborate it) — but he’s correct to point out that this isn’t part of the Buddhist argument. It should be added to the Buddhist worldview, but it isn’t classically part of it. I think part of the reason a “correct perception” of impermanence and no-self should generally lead to compassion is that human beings who seek enlightenment are usually, to some degree, interested in alleviating some suffering, or dealing with some pain, and a certain understanding of impermanence and no-self can (should?) engender self-compassion, self-forgiveness, self-kindness, and gratitude, which can then be easily directed outward to others. One should, theoretically, not jump to sadness and nihilism, because the idea of impermanence and an understanding of the origins and causes of “dukkha” is intended to be a *relief.* This relief allows us to be kinder to ourselves, and then to others.

In any case, I highly recommend this book to anyone with an interest in Buddhism and philosophy, especially those who lean “agnostic or atheist or naturalist” or what-have-you when it comes to spirituality or religion.




Things learned about ZFS and why I’m still using it

I was about to give up on ZFS because I was having some drive issues, but I think that was just because they were behind a cut-rate SATA port multiplier, as I never had issues with the ones that are connected to my other computer through a quality (well, still budget, but good) SAS card. It was also tempting to try aufs again, with snapraid, so I could make use of some smaller drives, but ultimately the extra space was a marginal benefit in comparison with certain features gained from ZFS.

So I’m still on ZFS, though I’m just doing raidz2 this time. I’d been reading that the optimal vdev size is 2^N+1 drives for raidz1, 2^N+2 for raidz2, and 2^N+3 for raidz3 (so 4, 6, or 10 drives for raidz2, for example), so I figured I’d try to do the optimal setup with what I had.

ZFS is so compelling because its snapshot and backup features are brilliant. Having installed zfs-auto-snapshot, I get hourly, daily, weekly, monthly, and every-15-minutes snapshots automatically, meaning if I accidentally delete anything I can quickly copy it out of a hidden snapshot, and these are differential so they take up hardly any space.
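Recovering a file from one of those hidden snapshots looks roughly like this (a sketch assuming a pool named “storage” mounted at /storage; the snapshot and file names are just illustrative):

zfs list -t snapshot -r storage   # list what zfs-auto-snapshot has taken so far
# every dataset exposes a hidden .zfs/snapshot directory; copy the file back out of it
cp /storage/.zfs/snapshot/zfs-auto-snap_hourly-2013-09-04-1200/docs/notes.txt /storage/docs/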

Furthermore, zfs send/receive, built into the tools, allows me to send the whole filesystem, with snapshots, to my backup server, and incrementally to boot, for better-than-rsync backups. You can do this manually with one snapshot to get a straight copy backup, and pipe to pv and netcat for a progress meter (modified from here):

On sending machine:

zfs send storage@zfs-auto-snap_daily-2013-09-04-0759 | pv -s 4597499264K | nc -l 3333

On receiving machine:

nc 192.168.47.2 3333 | pv -s 4597499264K | zfs receive storage

The number after “pv -s” is the size of the data to be transferred, in kilobytes, which you can obtain by running “df -Bk” on the sending machine and copying the “Used” size given for your pool. If you have a number of snapshots, as I do, this will actually be higher than the data you’ll really be transferring, but not grossly so; it just means you’ll be done sooner than the indicator says, so that’s exciting, no?
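Concretely (assuming the pool is mounted at /storage; the zfs line is an alternative I believe gives roughly the same figure, in bytes):

df -Bk /storage           # copy the “Used” column, in KB, into pv -s
zfs get -Hp used storage  # similar figure, reported in bytes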

That’s just for a quick-and-dirty filesystem copy from one machine to another. For daily use I found I was able to modify and use this backup script for nightly backups of the day’s changes, including snapshots.
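The heart of such a nightly job is just an incremental send between yesterday’s and today’s snapshots; a minimal sketch (host, dataset, and snapshot names here are illustrative, not my actual setup):

zfs send -i storage@zfs-auto-snap_daily-2013-09-03-0759 storage@zfs-auto-snap_daily-2013-09-04-0759 | ssh backuphost zfs receive -F backup/storage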

So, easy backups, plus ZFS is built to checksum your data, so with weekly checks and daily scripts to monitor your hard disks and data sets, you’re pretty safe from data loss.
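By “weekly checks” I mean a scheduled scrub plus a health report; illustrative cron entries (not my exact scripts) would look something like this:

# /etc/cron.d/zfs-maintenance
30 2 * * 0  root  /sbin/zpool scrub storage   # weekly scrub: re-read and verify every block's checksum
0 8 * * *   root  /sbin/zpool status -x       # daily: prints "all pools are healthy" or the gory details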

Daemonize CopyConsole as a Service in Ubuntu Linux

This is a script to run “CopyConsole” as a service, using an init.d script, on Linux. I have tested this on Ubuntu 13.04.

Background:

“CopyConsole” is the non-GUI agent for the Copy cloud storage service by Barracuda Networks; Copy is similar to Dropbox. In fact, I adapted this from the init.d script I’m using for running a headless Dropbox daemon on the same server, so it’s very simple. I am only interested in running these services under one account, so the scripts are not flexible for multiple users, but they could easily be modified to be so; just have a look at the default Dropbox init.d script they provide.

NOTE:

This assumes you already have Copy set up for your user. This does not cover the setup & authorization of Copy.

Despite the script not passing any of CopyConsole’s options, it picks up the correct auth and even the correct folder location for my Copy directory, which I set up in a non-standard place.

The Script:

#!/bin/sh
# CopyConsole (Copy cloud storage by Barracuda) service

start() {
    echo "Starting CopyConsole..."
    if [ -x /home/adam/.copy/CopyConsole ]; then
        # HOME must be set so CopyConsole can find its config in ~/.copy;
        # run in the background (-b) as user adam (-c/-u)
        HOME="/home/adam" start-stop-daemon -b -o -c adam -S -u adam -x /home/adam/.copy/CopyConsole -- -daemon
    fi
}

stop() {
    echo "Stopping CopyConsole..."
    if [ -x /home/adam/.copy/CopyConsole ]; then
        start-stop-daemon -o -c adam -K -u adam -x /home/adam/.copy/CopyConsole
    fi
}

status() {
    dbpid=$(pgrep -u adam CopyConsole)
    if [ -z "$dbpid" ]; then
        echo "CopyConsole for USER adam: not running."
    else
        echo "CopyConsole for USER adam: running (pid $dbpid)"
    fi
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart|reload|force-reload)
        stop
        start
        ;;
    status)
        status
        ;;
    *)
        echo "Usage: /etc/init.d/copyconsole {start|stop|reload|force-reload|restart|status}"
        exit 1
        ;;
esac

exit 0
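To install it (assuming you’ve saved the script as /etc/init.d/copyconsole, matching the usage line above), the usual Ubuntu steps apply:

sudo chmod +x /etc/init.d/copyconsole
sudo update-rc.d copyconsole defaults
sudo service copyconsole start
sudo service copyconsole status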

XBMC and Netflix on Windows 8 with Xpadder (and Kylo alternative)

I made this work for me with a limited solution I’m quite happy with. I wish Netflix would 1) support Linux beyond Android, and 2) allow API access for integration with services like XBMC, but in the world I actually live in, I settled on Windows and the Windows 8 “Metro” app version of Netflix. I was going to settle on Windows anyway, because I also want to play games launched by XBMC or Steam Big Picture.

Because I also wanted games, this made the solution pretty simple. I have an Xbox 360 wireless controller for gaming, which XBMC 12 natively supports as a remote control. All I needed to do was configure the computer so the controller input could also interact with the Netflix Metro app (this will work for any Metro app; I just have to create a launcher configuration for it).

I initially tried to use Pinnacle Game Profiler, but I was having issues with the “default” windows profile, and then trying to activate it by command line from AutoHotKey kept causing Pinnacle to crash. So I decided to try out Xpadder… I like Xpadder because it’s smaller, simpler, and has better (for me) command line support.

So here’s how this works:

  1. Create an Xpadder config that I’m happy with for operating the Netflix Win 8 app. I chose to make the left stick operate the mouse, the right stick operate the volume (as it does in XBMC), the A/B buttons the Left/Right mouse buttons, and then the D-pad does the mouse wheel up/down for left-right app scrolling (hmm, maybe I should switch that to left-right D-pad?), left D-pad for navigating “back,” and a couple other buttons for mute and something else I forget.
    ** The key here is setting the “Back” button to send the “Esc” key. This I have configured in AutoHotKey for exiting Netflix and returning to XBMC.
  2. I want to start this all from Advanced Launcher in XBMC, so I want to use AutoHotKey to set up a script. That means I need a command-line way to launch Netflix with the XPadder config. XPadder can do this, but it requires a Windows shortcut (.LNK file) to run whatever program you want it to launch. I was having difficulty getting it to use a simple URL-based shortcut; one can create one for the Netflix App by having it launch “netflix://” but that opens an IE window first. I probably could have created a batch file or something, but hey, I’m using AutoHotkey anyway! So:
    1. Create simple Autohotkey script to “just” launch the Netflix Win 8 app:

      stringtrimright, exevar, A_ScriptName,4
      run, netflix://
    2. Compile that script
    3. Create Windows Shortcut to run the compiled Autohotkey script
    4. Note Xpadder command line to launch Xpadder config & launch Netflix:
      D:\XPadder\Xpadder.exe "Netflix8" "D:\XPadder\NetflixWin8.lnk" /M
    5. Test it… It works!
  3. Now that I’m ready to launch the Netflix 8 app the way I want it, I need to set up XBMC. For that I need another AutoHotkey script to 1) launch Xpadder with the noted command line, 2) watch for the “Esc” key, and 3) close Xpadder/Netflix and switch to XBMC when Esc is pressed. XPadder also provides a command-line way to close itself and, I think, its launched program, but just in case it doesn’t close Netflix, I’m also telling the AutoHotkey script to do so. Here’s my script.
    
    ; Trim ".ahk" off the script name (leftover from the template; exevar isn't actually used below)
    stringtrimright, exevar, A_ScriptName, 4
    ; Launch Xpadder with the Netflix profile and have it start the Netflix app via the shortcut
    run, D:\XPadder\Xpadder.exe "Netflix8" "D:\XPadder\NetflixWin8.lnk" /M
    ; Esc is what the controller's Back button sends; pressing it tears everything down
    Esc::
    ifwinexist, ahk_class XBMC
    {
        ; XBMC is already running: close Netflix, have Xpadder close itself, switch back to XBMC
        WinClose, Netflix
        run, D:\XPadder\Xpadder.exe "Netflix8" "D:\XPadder\NetflixWin8.lnk" /C
        WinActivate, ahk_class XBMC
        exitapp
    }
    else
    {
        ; XBMC isn't running: close Netflix/Xpadder, then start XBMC and give it focus
        WinClose, Netflix
        run, D:\XPadder\Xpadder.exe "Netflix8" "D:\XPadder\NetflixWin8.lnk" /C
        run, "C:\Program Files (x86)\XBMC\xbmc.exe"
        sleep, 1000
        WinActivate, ahk_class XBMC
        exitapp
    }
  4. With that all set and tested a few times, XBMC will launch XPadder/Netflix; I can browse and play things, and hit that Back button to quit and return to XBMC.

    Without Windows 8 (I’m actually running Windows Server 2012 for this, but that’s for Hyper-V and multi-session remote desktop support, and a subject for a different post), one wouldn’t be able to run the Netflix app, but one could either 1) use the Windows Media Center Netflix plugin, whose interface isn’t as good and which I don’t think supports Netflix “Super HD,” or 2) launch a web browser to Netflix. Option #2 could also be combined with XPadder for the same sort of control.

    On Windows, a good browser for this is Kylo, a Mozilla/Firefox derivative designed for TV use (so-called “10-foot interfaces”). I set up the exact same sequence of launch events via AutoHotKey & XPadder, even using the same XPadder config, to launch Kylo for occasional browsing, and XBMC launches it like a champ.

    On Ubuntu, I guess I’d do the same thing but with the wine-compholio package for Ubuntu to run Wine/Firefox/Netflix, and some sort of temporary keymapping for the controller. But since I wanted maximum games, Remote Desktop, Hyper-V, and a few other Windows features, I stuck with Windows.

    Ideally, I wish I had a Windows- or Linux-compatible app-like interface for Netflix that looked and worked the same as the Netflix PS3 or Google TV interfaces, which are both about the same. I like that interface and it’s much better for couch-based browsing, but alas, no dice for Windows, and given all the other capabilities this setup affords me, I’m not going to also buy a streaming box or a separate gaming console. Although maybe I’ll live with some people one day for whom such solutions will be preferred….

Aluminum Can Houses

[Image: empty steel beer and soft drink cans being used to build experimental housing near Taos, New Mexico (NARA photo 556618)]

When I was younger, I used to read National Geographic World magazine. Once they ran a story about, I think, houses made of clay and aluminum cans. I wanted to be the kind of person who would know how to do that, when to do it, why to do it. I wanted to live in an aluminum can house, or at least meet people who did and learn about them.

Then something happened and I learned to “grow up” and hate the world.

Lately I am starting to feel like a kid again who wants to live in aluminum can houses. Or, rather, I don’t really want to live in an aluminum can house, necessarily, I just want to live with the attitude of 1) people who might, and 2) the kid who wondered at such things and wanted to learn & do more about them.

ZFS, not MDADM, finally.

So, never mind the mdadm part of that last post! I was fine with mdadm taking forever to perform its initial sync, but on reboot it was constantly degraded. I think it might have had something to do with Ubuntu 13.04’s boot order with my SAS PCIe card—meaning, I hypothesize that it wasn’t mounting all the drives before trying to reassemble the RAID device… but I’m not sure and I didn’t try to resolve that, because I had a better idea:

ZFS-on-linux. OK, it’s not as flexible as mdadm, or aufs or mhddfs, in terms of adding drives to expand storage, but I have plenty of storage right now. I don’t foresee any need to expand or add storage for two years, minimum. In two years, if for some reason I really need more storage, I may as well just buy a commercial NAS at that point.

For now, ZFS is working pretty great. It’s faster to set up than mdadm, though a little harder to find documentation online. So for that reason, I’m jotting down these notes.

I used ZFS with raidz3, as advised online (multiple howtos and FAQs (here’s one) said a 10-drive array was probably best with triple redundancy, and that was fine with me).

I used the “ashift=12” option because all my drives have 4K physical sectors masquerading as 512-byte sectors (see here).
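If you’re not sure what your drives report, the kernel will tell you (sdb here is just an example device):

cat /sys/block/sdb/queue/physical_block_size   # 4096 on these "advanced format" drives
cat /sys/block/sdb/queue/logical_block_size    # 512, which is what they masquerade as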

My create command:

zpool create -o ashift=12 storage raidz3 scsi-SATA_ST3000DM001-1CH_Z1F2HYRG scsi-SATA_ST3000DM001-1CH_Z1F2J0Z0 scsi-SATA_ST3000DM001-1CH_Z1F2TS6K scsi-SATA_ST3000DM001-1CH_Z1F2WXBY scsi-SATA_ST3000DM001-1CH_Z1F2X155 scsi-SATA_ST3000DM001-9YN_W1F010J3 scsi-SATA_ST3000DM001-9YN_W1F17M4L scsi-SATA_ST3000DM001-9YN_W1F18AMJ scsi-SATA_ST3000DM001-9YN_W1F19G1K scsi-SATA_ST3000DM001-9YN_Z1F16VP8
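A quick sanity check afterwards (my own habit, not from the howtos):

zpool status storage            # should show a single raidz3 vdev containing all ten disks
zdb -C storage | grep ashift    # should report ashift: 12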

I was having problems with a traditional NFS export mounting “rw” properly, perhaps because of the “fsid” option, but that’s only in hindsight. To set it up according to the zfsonlinux documentation I did:

1) create a dummy export in /etc/exports of '/mnt localhost(ro)'
2) run: zfs set sharenfs="rw=@192.168.47.0/24,all_squash,insecure,anonuid=1000,anongid=1000,no_subtree_check,nohide,sync" storage
   (this uses the options I had used for a standard /etc/exports)
3) run: touch /etc/init.d/.legacy-bootordering
4) edit /etc/default/zfs to set: zfs_mount='yes' and zfs_share='yes'
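A quick way to confirm the share actually went live (a sanity check of mine; client paths are whatever you like):

showmount -e localhost                               # the pool's mountpoint should be listed
sudo mount -t nfs yourserver:/storage /mnt/storage   # then mount it read-write from a client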

Note that I have no datasets; I’m just using the base zpool. I guess I’m just a “drop it in a giant bin” kind of guy. This seems to be working so far. In any case, I have backup.

autofs, mhddfs, mdadm, and snapraid

I’m in the process of reconstructing my home servers, with more linux and greater backup and redundancy. One problem I ran into was choosing a way to pool my storage. I have a bunch of 3TB disks that, previously, were housed only in SansDigital 5-disk RAID cabinets, hooked up by eSATA-to-SATA bridge to my little mac mini. In that situation, the RAID boxes themselves were doing RAID-5 drive pooling.

This worked fine, and I’ll still be using those boxes for backup, but over a year ago I was looking at having all the drives in one server case, utilizing software RAID or some combination of drive pooling and “flexible raid” or SnapRAID. Personally, I don’t feel comfortable using heavily “minority” solutions. It feels like it’ll be harder to recover data in a catastrophic situation, so I prefer using solutions that many others use. It’s easier to find help online, at least. “Minority” for me is a feeling I get from reading about solutions, and some trial, rather than any strict metric. For me, it means “FlexRAID” is out, as is Microsoft’s storage spaces and related Windows-based pooling solutions.

But I digress. I want something that has a lot of widespread use, some history, and some redundancy. On Linux, mdadm fits that bill, but over a year ago when I last tried it, I was too impatient to wait for the large RAID array to build. I was also trialing a lot of different Linux variants, so it had to rebuild or at least reassemble every time I installed a new one, and that either took time or broke the array. Plus I had flimsy backup. So I gave up and went for those SansDigital RAID boxes, which have really worked well.

This time around, I discovered how to use aufs, and was working on that as a solution, but before I even got to setting up the SnapRAID routines, I discovered a problem (for me) with an aufs feature called “whiteout” files. On the underlying filesystems that are pooled together, if they are read-only, deleted files are not actually deleted. Instead, aufs writes “whiteout” files so that the pooled filesystem appears without the “deleted” files, even though the “deleted” files still exist on the lower layer.

From what I understand, this shouldn’t happen in my situation. All my pooled filesystems are read-write! But, as I needed to re-install Ubuntu a few times to test various things, I kept needing to re-build the aufs pool, and I kept discovering files I’d previously deleted were still there. This was distressing. I probably shouldn’t have been deleting the whiteout files (.wh.*) on all the volumes, but 1) I wanted to start clean each time, and 2) I didn’t realize deleted files were sticking around. And 3) this could cause disk space problems down the line as space is not actually cleared.
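For anyone hunting these down, the whiteout entries are easy to spot on the underlying branches (paths illustrative; the .wh..wh.* names are aufs’s own bookkeeping and should be left alone):

find /mnt/disk1 /mnt/disk2 -name '.wh.*' -not -name '.wh..wh.*'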

I looked into some troubleshooting, but I ultimately decided this was wasting my mental energy. Plus, all pooled filesystems, from what I read, have some problems with filesystem links, be they soft or hard, and I decided it would be nice to be able to make full use of rsnapshot and rsync --link-dest without worry, for various backup operations.

I took a quick detour to try out mhddfs, but as reported elsewhere, it definitely hits a speed bottleneck. Copying from my original aufs-pooled SMB share over the network to a Windows workstation, I got a steady speed of over 100 MB/s. With mhddfs, I got an equal starter speed, which dropped to about 25 MB/s after half a minute or so.

So, I’m going back to mdadm. It’s going to take a while to build the array (1.5 days!), but I have more patience this time around, a good backup (mind, I’m using the backup as production in that time span, to keep my services active, but it was production for so long anyway, I think I can bet on 1.5 more days of use, even in this heat), and I don’t need to reinstall Linux anymore. I have a bit more faith and a bit more confidence, too, from an extra year’s worth or so of Linux understanding & experience.