Mr. Sanders has proposed a headline top tax rate of 52 percent, applying only to incomes over $10 million. But that’s just the federal income tax. When you combine it with other taxes that apply to income, like existing payroll taxes and new ones Mr. Sanders would impose to pay for Social Security, single-payer health care and family leave, and then add those on top of taxes levied by state governments, it would add up to a combined tax rate of over 73 percent on the highest incomes, more than 20 points higher than today. That’s in the average state — maximum rates in high-tax jurisdictions like California and New York City would be even higher.
While the economic theory might hold up on paper, it assumes wealthy people don’t change their behavior. I’ll ask you: if you knew that any money you made beyond $10 million would be taxed at such a high rate, how hard would you work to try to make more than that? Or, gasp, might you consider moving to another country with a lower tax rate?
I know what you’re thinking: “Boo-hoo. Greed is bad. We don’t want people keeping what they make after $10 million anyways. That’s an ungodly sum of money and no one needs that. They’ll just have to make do with their paltry $9.99 million paychecks.”
One big reason this is such a problem is that a large share of federal revenue is dependent upon higher-income earners. If we drive higher-income earners out, or disincentivize earning more than a certain threshold, our revenue models stop working. Longer term, we drive people away from higher-income jobs. Sooner or later, we have a deficit. And guess who’s going to have to pay the difference?
You guessed it. Everyone making less than $10 million per year. Which means higher taxes for pretty much everyone else.
My iMac has started resetting my mouse’s tracking speed upon every restart. While somewhat frustrating, it’s pretty easy to open up System Preferences -> Mouse, and update the tracking speed to one notch below “Fast” and get on with my work.
While it’s gotten a little old, it also got me to thinking: why does Apple measure mouse movement in terms of “Tracking speed”? And what is tracking speed, anyways?
After doing a “fair bit” of research (read: jumping to the mouse speed section on Wikipedia), I encountered an interestingly named measurement called “Mickeys per second” (tee hee). It makes some sense: according to Wikipedia, it measures “the ratio between how many pixels the cursor moves on the screen and how far the mouse moves on the mouse pad.”
While, at some point in the past, this might have been a completely sensible measurement, we’ve moved somewhat beyond pixels. Pixels used to be visible to the naked eye, but with today’s 4K and 5K displays, that’s no longer true. What also struck me was that, unless Mickeys per second could change with each display, setups with multiple displays would need a variable number of Mickeys per second to render a constant speed mouse pointer (at least in physical space). Obviously, behind the scenes, modern operating systems are flexible and take account of this, but this dynamic behavior is hidden from the end user.
Let’s go back to the beginning. Here I am, updating my tracking speed a couple times a week. When I do something more than once, my instinct is to find a way to stop doing it. Ultimately, I came to the conclusion that we (or really, Apple or Microsoft) are thinking about this in the wrong way.
Think about it. Mouse velocity comes down to three things:
The “reach” of the user’s hand (i.e., the maximum distance the center of the mouse sensor can be moved by the user from one side to the other).
The size of the screen.
The “intent” distance (i.e., the smallest intentional movement a user can make).
Without taking user comfort into account, the absolute minimum for this hypothetical measurement should be one screen per reach. No matter how good you are with computers, it’s a bad experience if you need to lift up your mouse several times to position the cursor in the right place. The maximum, again, needs to be user-specific: if mouse control is erratic or difficult for the user, the intent value should be larger than for someone with good hand dexterity.
Since we don’t want the cursor to move at all until the mouse has traveled at least the intent distance, and we don’t want a full reach to move it less than one screen width, we can determine an upper bound and lower bound for mouse velocity. Furthermore, extracting these values doesn’t entail asking the user to drag a marker along some arbitrary scale.
It would be relatively easy to determine these values automatically based on a simple tool: just ask the user to move the mouse from side-to-side, and then display a grid to the user, prompting them to click between two points as close to each other as possible. Since screen size is already known by the OS, it would just be a simple matter of crunching the numbers to an internal value.
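The arithmetic behind those bounds is simple enough to sketch in a few lines of shell. All the numbers here are hypothetical, just to show the calculation:

```shell
# Hypothetical measurements — a real tool would gather these from the user.
SCREEN_PX=2560        # screen width in pixels (already known to the OS)
REACH_MM=256          # measured side-to-side reach of the user's hand, in mm
INTENT_TENTH_MM=5     # smallest intentional movement: 0.5 mm, in tenths of mm
TARGET_PX=8           # smallest on-screen distance the user needs to hit

# Lower bound: one full reach must cover at least one screen width.
MIN_PX_PER_MM=$(( SCREEN_PX / REACH_MM ))

# Upper bound: the smallest intentional movement shouldn't overshoot
# the smallest target the user needs to hit.
MAX_PX_PER_MM=$(( TARGET_PX * 10 / INTENT_TENTH_MM ))

echo "cursor speed should be between $MIN_PX_PER_MM and $MAX_PX_PER_MM px/mm"
```

With these made-up numbers, that works out to a range of 10 to 16 pixels of cursor movement per millimeter of mouse movement — a concrete range the OS could pick a default from, instead of an unlabeled slider.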
There are probably lots of bigger fish to fry on Apple’s Mac OS X team, but I think it’d be a huge improvement to the user experience and would make this setting a lot less opaque to end users.
Marco Arment released an iOS 9 Content Blocker on Wednesday and it quickly rocketed up to #1 on the Paid App Store charts. At that ranking, apps can pull in tens of thousands of dollars per day.
For very good reasons, he removed it from sale this morning. And then things got nasty.
Thing is, I think I understand what this week has been like for Marco. In a lot of ways, it reminds me of what happened when a site I wrote went viral a few years ago.
If you’ve never made something that’s gone viral before, let me break down what it feels like for the maker:
You’re in shock. You can’t believe something you made resonated with so many people.
You freak out because your inbox has become a disaster.
You try to get some work done and deal with the explosion of feature requests and attention.
You start reading online and notice people are saying some really shitty things about you.
At this point you can’t think about anything else except the shit that people are giving you.
You want to shut it all down and forget about it.
For anyone who thinks Marco thought this through or planned it in advance: you’re just deluding yourself. Who in the world could anticipate making the #1 paid app in the App Store? I don’t think any independent developer has ever done this outside the context of a game.
If I were Marco, I’d have been feeling dejected and depressed by this morning and would’ve wanted all of the attention to go away. No one can deal with that kind of attention alone.
In his post, Marco specifically pointed readers to instructions on how to get App Store refunds. I mean, people, if he was actually trying to scam you, don’t you think he’d just leave the app up on the store and just never say anything else about it? At least he’s being honest. If you already purchased the app, it’s not like you didn’t get anything in return.
You bought a working app. It still works. It will continue to work. There is no scam. Marco isn’t being an asshole. If he’s anything like me, he’s overwhelmed and just wants to get back to normal life. Being in the crosshairs can really suck.
Be a little more empathetic. Thank Marco for being honest. Respect his decision. He’s just a person like you and me. I realize how easy it can be to forget that when everyone is just an avatar, but please take it to heart. I wish more people had when a similar thing happened to me 4 years ago.
We’re vacationing in Whistler, BC right now as “endurance spectators” to my father-in-law’s 3rd Ironman triathlon. Expecting some beautiful landscapes and weather, I brought my newly acquired X100T to take some nice photos.
Yesterday, I set it up on an interval timer and pointed it right towards Rainbow Mountain, which faces the kitchen patio of the little condo unit we’re renting. After all was said and done, I ended up with 400 images depicting clouds moving over a mountain peak and not much idea what to do with them. So, as any self-respecting engineer would, I set out to create a time-lapse using only my trusty command-line tools: FFmpeg and ImageMagick.
Let’s get down to it.
Note: Everything in this tutorial assumes that you have a current copy of ImageMagick and FFmpeg installed on your machine.
Even though I turned off RAW on the X100T, the images were still pretty huge (4896x3264). During my first tests, making movies from images this large gave really inconsistent results and took a long time to create, with not much extra benefit.
Therefore, the first thing you should probably do is check the size of your images and, if necessary, resize them to be a bit smaller so they will play more nicely with FFmpeg and any other image manipulation that you’re going to do.
Since I planned to upload my video to YouTube, I referenced a handy page they have that lists out their preferred resolutions, codecs, and formats for upload (https://support.google.com/youtube/answer/1722171?hl=en). If you’re like me, and you don’t care too much about maintaining the current aspect ratio, here’s what you can do. This will resize your images to a preferred resolution (in this case, 1280x720), and will potentially crop off the sides or top in the process. To start, make sure you’re in the directory with all of your photos.
$ for FILE in `ls *.JPG`; do \
mogrify -resize 1280x720^ -gravity center -crop 1280x720+0+0 +repage -write RESIZED_PHOTO_DIRECTORY/$FILE $FILE; \
done
In detail, this command resizes each photo to fill 1280x720 (the caret tells ImageMagick to treat those dimensions as minimums, scaling the image until it completely covers that area, even if one dimension overflows), then uses -gravity center and -crop 1280x720+0+0 to crop it to exactly 1280x720 from the center, resets the canvas with +repage, and writes the result to RESIZED_PHOTO_DIRECTORY/$FILE. Phew, that was a mouthful.
If you just want to resize to a certain width and maintain the original aspect ratio, just do this:
$ for FILE in `ls *.JPG`; do \
mogrify -resize 600x -write RESIZED_PHOTO_DIRECTORY/$FILE $FILE; \
done
Maintaining Color Distribution
Note: this step might not be necessary in your situation, but it greatly improved the quality of the final product for me. YMMV.
Sometimes images captured in a time lapse have very different histograms (especially if you have auto-aperture / shutter-speed enabled), and this can make things look “jumpy” from frame to frame. Obviously, this won’t look great in your final video, so we’re going to normalize the colors to a set distribution.
For an example, just compare the following two images (especially notice the trees, which are much lighter in the first example than the second):
Not ideal, right?
To help achieve this end, I used an ImageMagick script called histmatch, generously provided by Fred Weinhaus (link: http://www.fmwconcepts.com/imagemagick/histmatch/index.php). The idea is to use a reference image to generate a histogram that we want all of the other images to match. Once you’ve decided on your reference image, run the script on every image except the reference image (otherwise the universe will explode).
(I just piped the output of ls *.JPG into a file called normalize.sh and used some of my Vim-fu to do this. Your process might be different.)
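If Vim-fu isn’t your thing, a small loop can generate the script for you. Note that the reference filename and histmatch’s argument order (reference image, image to adjust, output file) are assumptions here — check Fred’s page for the exact syntax before running it:

```shell
# Generate normalize.sh instead of hand-editing it.
# REF is a hypothetical reference frame; histmatch's argument order
# (reference, input, output) is assumed — verify against its documentation.
REF=_DSF0001.JPG
mkdir -p NORMALIZED
for FILE in *.JPG; do
  [ "$FILE" = "$REF" ] && continue   # skip the reference image itself
  echo "./histmatch $REF $FILE NORMALIZED/$FILE"
done > normalize.sh
```

After eyeballing normalize.sh to make sure the commands look right, run it with `sh normalize.sh`.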
Finally, make the darned movie
This is the fun part. Just send the files through to FFmpeg and have it do its magic. If your filenames are numbered incrementally, you’ll want to provide a couple of extra parameters (like -start_number and the _DSF%04d.JPG input format) to make things match up.
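A sketch of the invocation — the _DSF%04d.JPG pattern, the starting number, and the frame rate here are assumptions based on my filenames, so adjust them to match yours:

```shell
# -framerate and -start_number are input options, so they go before -i.
# Filename pattern and start number are assumptions — change to match your files.
ffmpeg -framerate 24 -start_number 1 -i _DSF%04d.JPG \
  -c:v libx264 -pix_fmt yuv420p video.mp4
```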
This tells FFmpeg to take all of the JPEGs in the directory starting with _DSF and ending in 4 digits, and to output an H.264 video with the yuv420p pixel format to video.mp4. You now have a beautiful time-lapse!