Kristaps Porzingis Was Ready To Scrap

The Knicks took on the Suns last night, and despite losing this matchup of dysfunctional defenses, Kristaps Porzingis posted 34 points (including a perfect 4-of-4 from the arc), eight rebounds, three blocks, three steals, and the feistiest moment of his brief career. In the third quarter, Porzingis stumbled over a defender and was intentionally flung to the ground by the Suns’ Marquese Chriss. He did not take that lightly:


The Case of the Main Website (Part 1)

Recently I was asked to join a committee at work regarding the creation of an intranet site.  This was an idea that had been smoldering for a little while around campus, but came to the fore with the redesign of the main website.  It’s an event I had seen before.

Previously the site used a slightly older design, with the page divided into sections, as was the style at the time.  The new redesign is more in vogue with current aesthetics: a large background picture with different sections scrolling in the foreground over it.  It looks nice, it’s hip, it’s in fashion, but it hasn’t proved all that functional for those of us already on campus.

This event raises the question:  Exactly who is the target audience for an educational institution’s main website?  In times past, the root site was seen as a first stop for those on and off campus, whether or not they were affiliated with the institution.  It contained information for prospective students, parents, alumni, and the like, but also had links to internal resources for current members of the community.  It was one site serving dual roles, but is that what it should be?

With the rise of the Internet to something we all use and take for granted, the focus of what a main website should be has changed.  The root website has now essentially been turned over to the Marketing Department as a recruiting tool.  There may still be token links to internal resources and information, but the focus has shifted sharply toward advertisement.  I don’t think this is a bad thing, but it has left out a sizable population.

I can remember the days–and maybe where you are they still do this–when every lab computer had its web browser homepage set to the root university website.  Everyone started their Internet adventure there.  With the handing over of the site to Marketing, this is now a superfluous gesture that only adds more steps between the user and Google.  Hence the need for an intranet starting point.

Now comes the big question:  What goes on an intranet portal?

The Case of the Home Network (Part 2)

Last time I went through the journey of getting our house set up with a full network.  When I left off, my new dual-band cable modem from Time Warner was strong enough for a decent signal to reach from the basement on one end of the house to the second floor on the other.  Technically then, I could have stopped right there.  But I have two Wireless N-class routers and another WRT54G.  I can’t just let them go to waste!

My first idea centered around sharing my iTunes library to the network and, more importantly, having it available to an AppleTV connected to the television.  My master iTunes library is hosted on a Mac Mini that uses my 32″ television for a monitor when needed.  Most of the time, however, it boots at 5PM when I get home from work, shares my library with the house, then shuts itself down at 1AM after I’ve gone to sleep.  In the past I’ve tried to watch movies with it directly, and while that works, it isn’t the ideal setup.

When my workplace rotated some of our AppleTVs out of deployment, I took one home and connected it to the network.  Adding this device made it easier to stream from my library.  However, I didn’t like that both the Mac Mini and the AppleTV sat next to the TV, literally one on top of the other, but had to communicate via the wireless router downstairs.  It seemed rather inefficient to have it set up like that, and it also put two bandwidth-heavy devices on the wireless network.  I could make things simpler.

Using one of the TP-LINK routers, I connected both the Mac Mini and the AppleTV to it via ethernet and disabled wifi on both devices.  Now content could be streamed between them without ever leaving the router’s internal network.  This not only made media sharing quick, it gave a nice signal boost to anyone sitting on the couch with their iPhone, iPad, or laptop.

The second TP-LINK ended up on the second floor in the master bedroom.  This soaked the entire second floor with a strong wireless N signal, supplementing the modem in the basement.  Now every floor has its own wireless router blasting radio waves throughout the house.

As for the WRT54G, currently it isn’t deployed.  Actually right now it still sits, unplugged, in my wife’s office in case the wireless card in her computer suddenly fails.  It should be fine, but an open box item is still an open box item.  I want to give it just a little more time before I proclaim it perfect.

When the day does come I will likely redeploy the Linksys to the garage, and push the wireless signal outside the house for when I am working in the garage or yard.  I have an old MacBook that I keep in the garage to look up information when I’m working on one of our cars or something along those lines.  So a little signal boost out there couldn’t hurt.

I think now I’m finally happy with the way things stand with the network.  I have solid, fast signal throughout the house.  More importantly, my wife’s office is set and working as it should be.  The question is, can I leave things alone for a little while…

The Case of the Home Network (Part 1)

When purchasing our current home several years ago, one of the requirements–besides having enough bedrooms–was a home office for my wife.  She telecommutes, so a dedicated space for her work was a must.  In addition, we would need a fast, stable home Internet connection.

The best option we had available at the time was Time Warner Cable.  We signed up for their 20Mb service, and I installed all the equipment.  The cable modem ended up in the basement on one end of the house.  My wife’s office ended up on the second floor on the opposite end.  That’s quite a run for the wifi when it has to pass through ceilings/floors with water pipes hidden in the walls.

This was compounded by the relatively weak wireless transmitter inside the cable modem.  The wireless-G cards I had couldn’t pull in enough signal to make a reliable, fast connection.  My solution was to re-appropriate Old Reliable:  A Linksys WRT54G router.

After upgrading the WRT54G to the latest DD-WRT firmware, I was able to have the router join the network with my wife’s work desktop plugged into an ethernet port.  The added signal boost of the Linksys made for a stable connection that worked well enough.  This is how the home network was configured and run for several years.

With this setup, my house had decent enough 54G speeds.  However, as newer devices entered the house–most notably increasingly faster iPads and a couple upgrades’ worth of iPhones–I couldn’t take advantage of their speed.  It seemed clear, as a nerd, that something had to be done.

To fix this issue I acquired a pair of TP-LINK N600 routers.  Capable of N-class speeds, these devices were destined to upgrade my network and provide better coverage all over the house.  Instead what they wrought was a plethora of headaches as I tried to make everything talk to each other properly.

With the addition of the routers I was finally able to address the issue of a weak wifi signal.  Turning off the transmitter in Time Warner’s modem and replacing it with the much stronger antennas of the TP-LINK unit in the basement, as well as setting up the second TP-LINK on the second floor, meant a good, strong signal on both ends of the house.  It was a better setup overall, but still not perfect.

One of the first things I did upon receiving the routers was to load custom firmware on them.  As I mentioned, my Linksys ran DD-WRT, but in my reading the consensus choice for the TP-LINKs was OpenWRT.  Having never used OpenWRT, I decided to give it a try.

The immediate issue I ran into was that out of the box, OpenWRT doesn’t have an option to act as a client bridge.  In this mode, the router joins the existing network as a client and passes all of the network traffic (DHCP requests, network sharing protocols, etc.) on to its own clients–in particular the clients connected via the ethernet ports, as my wife’s work computer was.  Although her computer could connect to the Internet, the lack of a client bridge mode meant that it was isolated.  Things such as its printer couldn’t be shared, and any shares I had available on other devices were invisible to her computer.

There was another complicating factor in all of this tinkering and configuring of the wireless network:  My wife still had to work.  As part of her job she facilitates webinars, so her connection has to be stable.  I was essentially doing testing in production, and failure or instability was not an option.  For this reason, once I had her computer online with a good connection out to the wider world I was hesitant to mess with it.

The impetus to change things came a couple months ago when TWC sent me a letter.  In an effort to compete with newer Internet offerings from other companies, they were upgrading all of their plans.  My current speed of 30Mb was about to jump to 200Mb!  As part of this upgrade we would be getting a new modem with dual antennas that promised a significantly stronger signal.  Once it was configured and connected this proved to be the case, with the wifi able to reach every corner of the house.

Solving that problem left me with another, more pleasant issue to deal with:  What to do with the TP-LINK router that served as the primary wifi transmitter for the cable modem?  I had an extra router that wasn’t necessarily needed anymore.

The first thing I decided to do with it was revisit the firmware.  In the time since I had first purchased the routers, a custom DD-WRT firmware was created for the TP-LINK (listed in the DD-WRT router database as a WDR3600).  Once I managed to get OpenWRT off, the original factory firmware back on, and then load DD-WRT, things began to fall into place.

With the router in client bridge mode once again, my wife’s desktop was now able to see the wider network.  The printer was available, shares were visible, and the signal back down to the basement was strong and reliable.  The network was back in good order, as it should have been in the first place.

The final piece of this puzzle fell into place as I was browsing around NewEgg.  On a whim I looked at their desktop wireless cards, and lo and behold they had an open box, triple-antenna Rosewill wireless card for dirt cheap.  I decided to take a flier on it and ordered one.

With the addition of that card, the client bridged router in her office was no longer needed.  The desktop became just another client on the network, the printer was plugged in via USB, and then it was shared out to the rest of the house.  So now, I had two superfluous routers.  Which gave me the opportunity to soak the entire house in wifi…

The Case of Knowing When to Upgrade

Although this may date me, I think it’s fair to say that I am an “old school” first-person-shooter gamer.  I started my FPS days with DOOM, DOOM][ (notice how I wrote that?), moved on to the QUAKE franchise, etc.  I played them all.

For years my computer refresh and upgrade cycle was dictated by id Software release schedules.  I built a Pentium Pro computer for QUAKE, followed by a Pentium II desktop for QUAKEII, continued on with that rig through QUAKEIIIArena, built a new computer for DOOM3, and finally constructed a new machine for RAGE.  I can still remember the first time I fired up a pair of Voodoo II-based 8MB Diamond Monster II video cards (SLI 4EVER!), and the effect that had on QUAKEII.  That was a legitimately jaw-dropping moment for me, as I had been using the software renderer up until that point.

My current computer is now five years old, and was built to play RAGE.  It’s an unlocked i7, 16GB of RAM (upgraded from the original 8GB), SSD primary drive, and an eVGA 4GB GeForce 580.  It played RAGE flawlessly, maintaining a solid 60 frames-per-second rate even when forcing higher resolution textures.  RAGE remains the most fluid and beautifully animated FPS I’ve ever played.

But now there is a new id Software game out.  Although when I say “id Software” I should clarify:  It’s not the id Software I grew up with.  Romero is long gone, and now Carmack has left as well.  id is now part of Bethesda Softworks, and is no longer the independent video game company run by the Ferrari-driving nerds I followed religiously when I was younger.

So when I saw the announcement that DOOM was being resurrected, I was skeptical.  Given that I’m older now, with much less time to throw down multiple hours of gaming in a row, I don’t really buy games first-run anymore.  The last two games I purchased first-day and full-priced were RAGE, and South Park: The Stick of Truth (which is a perfect South Park game, and the one we’ve secretly always wanted).  Other than that I let new releases slide, wait until all the downloadable content is out, then pick up a package deal on the cheap with a Steam sale.  My strategy would be the same for the new DOOM.

Plus there was another sneaking suspicion I had:  That DOOM was going to suck.  I may be the only person on Planet Earth who actually enjoyed DOOM3 and RAGE, but I had no faith that a new team could recapture the magic.  So when the reviews started to come in, that not only was DOOM good, but it was really, really good, I was shocked.

All of that however, was really just a cover story I told myself to avoid a painful truth:  That my five-year-old computer was not going to be able to handle playing DOOM.  Looking at the minimum specs, my computer didn’t meet them.  I felt sure that if I even got it to run, I’d have to turn down the quality and resolution so low that it would cheapen the game.  I had to do this to an extent back during the time of QUAKEIIIArena, when my beloved Voodoo II cards started to have issues at higher quality settings.

Then, a couple weeks ago, an announcement came out that you could install the first level of DOOM via Steam for a week to try it out for free.  Since it was free, I decided to give it a whirl.  I “purchased” the demo from the Steam Store and installed it on my five-year-old computer.  I readied myself for the horror of watching my little chitty chitty bang bang try to deal with a current-generation FPS.

As I booted the game for the first time, I felt my initial hesitation being confirmed.  The game was taking a long time to start up–way longer than it should.  I prepared myself to play a 10FPS slideshow of a game, and to see how badly I had to mangle the settings to get it playable–if that was even possible.

Once the menu loaded, I set the resolution for my monitor, and jumped right in to start playing.  And I was actually playing.  No slideshow.  The animation was fluid.  The graphics looked good.  I was shocked.

So I dove into the advanced settings to see what I was working with.  I cranked them up.  Ultra-everything!  Full resolution!  And again…no problem.  I was playing the game without any issue or slowdown.

I went back in to the settings one final time and turned on the real-time frames-per-second meter to show me exactly what I was getting.  My number never dropped below 35FPS.  The game was not just playable, but worked great and looked good doing it.  Again…shocked!

Now I’m left with a conundrum.  My inner nerd desperately wants me to plunk down $60 to start killing imps.  My more logical side, however, continues its counsel of patience:  wait for the price to drop and the DLC content to become available.  So far, the latter is winning.

This entire event is really a perfect example of a phenomenon that many in the tech industry have talked about:  Computers are lasting longer.  The usable lifespan of a computer bought today is significantly greater than that of one bought earlier in my career.  Three years used to be the norm, but now things are stretching to four years and beyond.

We’re starting to see this now in the rumors of the new iPhone release cycle.  My first iPhone was a 3G.  I kept that until I bought my iPhone 5.  By the end, the 3G wasn’t actually usable as a smartphone.  Opening an app like Facebook took minutes.  I promised myself after that experience I wouldn’t do that again, but would instead go every-other-year.  I was true to my word purchasing my 6, and now look at the horizon for the 7 later this year (although what I’m going to do there is a whole other post…).

When technology is new, this rapid progress is understandable.  I get it with regards to computers during the early years of wide acceptance and the growth of the Internet.  I understand what’s happening with the emergence of smartphones and tablets.  However, I didn’t think this process had made it to video gaming rigs.

In the past, gaming rigs were always different.  Almost always home-built, with the absolute latest components, gaming rigs were several steps above your normal, home desktop.  Every six months new video cards, faster RAM, or larger hard drives would come out.  There was a constant nerdy pressure to upgrade.

This phenomenon was fueled by video game makers and reviewers.  In a non-stop effort to make their games prettier, more powerful, more immersive, and more fun, programmers constantly upgraded their own internal computer hardware to see how much further they could push new gear.  Review sites followed suit, always showing off high-resolution screenshots, ultra-quality videos of gaming footage, and ever-increasing benchmark scores.  It was a never-ending arms race in pursuit of a few more frames-per-second.

In some ways this is still going on.  Reading pages like this, comparing GeForce video card generations, the nerd part of me gets the itch that says, “Upgrade that video card at least!”  Then however, I remember that I just played a brand-new DOOM on five-year-old hardware, at high quality, without issue.

So it seems the march of progress has slowed down enough to affect even the home-built gaming rig.  The nice thing is that, for now at least, I can hold off raiding NewEgg for the latest gear.  Knowing that I can run DOOM now, and confident I’ll be able to handle the next South Park game when it comes out at the end of the year, I don’t feel compelled to drop $1,500-$2,000 on a completely new system.

My only real fear at this point is component failure.  My desktop has been running solid for five years.  Even when it sits idle it’s running SETI@Home, so it’s doing something every minute of every hour of every day.  That’s a lot of wear and tear on computer equipment in a dusty basement.  As long as I can still get and install replacement parts, however, I’m willing to run that risk.

 

The Case of Being Mostly Cloudy, Continued

A couple posts ago I wrote about my struggles to make some sense of all the cloud storage I have, and what to do with the spaces that are scattered around the Internet.  As I’ve been working through this I’ve started to make some final decisions.  I’ve upgraded a couple things now that should bring some stability for at least the next year.

My motivation for doing all this, and my hesitation and nervousness as I’ve worked my way through this process, has been my pictures.  I am incredibly paranoid about losing the photos I have on my phone of my children, in particular.  One of the major reasons I purchased my 128GB iPhone 6, other than to be able to store every single MP3 I have locally on the device, is to make sure I have room for my pictures.  All my pictures.

I have pictures of my children on my phone that were taken with an iPhone 3G.  I like having those photos there on my phone always accessible.  I have them backed up in several different places, both locally on my hard drives at home (both internal and external to my desktop) as well as in several different cloud locations.  I do not want to ever lose any of them, and I take those insane steps to ensure that doesn’t happen.

The root of this problem is that I have not been able to fully trust iCloud and how Apple treats pictures.  For example, Apple seemingly arbitrarily separates out photos on the Camera Roll from other types of pictures.  Why?  If I take a picture it should go with all the other pictures on my phone.  One spot.  Easy.  The iPhone has never done this–or at least never done it very well.

Apple is trying though–at least I think so.  Enabling the iCloud Photo Library is a step to make sure that all pictures go into one place, and can be accessible anywhere.  But again, when I went to turn on the Photo Library on my iPhone I was presented with a message that doing so would remove 3,000+ photos and videos from my phone.  What?  Why?  If I turn on the iCloud Photo Library it should automatically take every picture and video on my phone that isn’t stored on iCloud and upload it.  There is no reason for this.  This is also not the first time changing a setting on my iPhone has hit me with a warning such as this.

That is a perfect example of why I haven’t been able to fully trust Apple and iCloud with the well-being of my pictures and videos.  Because of this, I took Microsoft up on their offer of a free year of Office365, which includes 1TB of storage space on OneDrive.  It’s ironic the one service I’ve come to trust to securely store photos and videos from my iPhone is a Microsoft app.

Since signing up with Office365/OneDrive for a year, I have triple-checked that every photo, video, slo-mo, burst, and panoramic image on my phone was backed up in several different locations, including a full import into Apple’s Photos program on my MacBook and a copy of everything out of Photos into a separate folder elsewhere on the Mac’s hard drive.  Then…reluctantly…hesitantly…with both fingers crossed…I took the plunge and enabled the iCloud Photo Library, warning and all.

As it stands now, a couple days after agreeing to have a few thousand image and video files wiped off my phone…things seem to be OK.  Everything that was in Photos now shows up in iCloud and on my iPhone.  I’ve also added many more pictures taken with a digital camera dating back to 2006 to the iCloud Photo Library, and those images are now available on every device I have connected to iCloud as well.

So now, after all of that (largely self-inflicted) grief and consternation, I return to my original premise of what to do with all of the cloud storage options I have.  Here’s what I’ve come up with:

  • iCloud: Primary home for all digital photos and videos, and Apple device backups
    • I currently have more than half of my 50GB allotment free, so this space should last a while.
  • OneDrive: Primary photo and video backup
    • The one issue with iCloud I want to guard against is a mistaken edit or file deletion cascading across all of my synced devices before I could stop it.  OneDrive will serve that role for now.  I’ll be revisiting this decision in a year’s time when Microsoft wants $70 to continue my subscription.
  • Google Drive: Personal file sharing across all my devices
    •  Since I use Chrome as my primary browser this will be the easiest place to transfer personal files around to each device.
  • Box:  Work file sharing
    • My institution has an enterprise license for Box, so I’ll store work-related files and documents in it.  This will let me be more mobile in where I work, and make sure that my personal and work files don’t become intermingled.  My personal Box account will now lie fallow.
  • Dropbox:  Abandoned
    • I’ll keep my account, and likely even keep the sync client installed on my desktop at home.  However, with its comparatively tiny space allocation versus the other options I have available, it will likely stay unused and empty.

We’ll see how long this lasts.

The Case of Cleaning SCCM

I ran into a problem over the last couple of days that I hadn’t encountered before: a System Center Configuration Manager client that wouldn’t talk to the management server, and wouldn’t uninstall.  This was a Windows 10 laptop, which may have contributed to the issue.  In the end I had to do some googlesleuthing to discover the answer, which required me to go full-on manual uninstall including removing registry keys to actually get the client out.

First I tried the standard ccmsetup.exe /uninstall which did exactly nothing.  I tried both regular and administrator command prompts, but to no avail.  My next step was to use the ccmclean.exe utility that was supposed to rip out any SCCM client install.  For me and the Win10 laptop however, it didn’t get everything.

My final step was to do the manual uninstall.  This requires the following which I found here:

  1. Removing these services (neither of these were present on the Win10 laptop that I could find):
    1. SMS Agent Host Service
    2. CCMSetup Service
  2. Removing the following directories:
    1. %windir%\ccm
    2. %windir%\ccmsetup
    3. %windir%\ccmcache
  3. Removing the following files:
    1. %windir%\smscfg.ini
    2. %windir%\sms*.mif
  4. Removing these registry keys:
    1. HKLM\Software\Microsoft\ccm
    2. HKLM\Software\Microsoft\CCMSETUP
    3. HKLM\Software\Microsoft\SMS
  5. Removing these WMI namespaces:
    1. Root\CIMV2\SMS
    2. Root\CCM

The last step presented a problem as the tools to edit WMI didn’t work on the Win10 computer.  I found some PowerShell commands in the comments section of this page that are supposed to do this instead:

  • get-wmiobject -computername [COMP] -query "SELECT * FROM __Namespace WHERE Name='CCM'" -Namespace "root" | Remove-WmiObject
  • get-wmiobject -query "SELECT * FROM __Namespace WHERE Name='sms'" -Namespace "root\cimv2" | Remove-WmiObject

Despite trying them both verbatim, fiddling with syntax, and rewording both as older-style PowerShell code, I couldn’t get either of those to work.  Regardless, once I went through all those steps I was able to reinstall the SCCM client, and it now seems to work correctly.
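For future reference, the manual steps above can be collected into a single script.  This is only a sketch, assuming an elevated PowerShell prompt; the service names (ccmsetup, and CcmExec for the SMS Agent Host) and the double underscore in the WMI __Namespace system class are my assumptions of the usual values, so verify each path and key on your own machine before deleting anything:

```powershell
# Sketch of the manual SCCM client removal steps above.
# Run from an elevated PowerShell prompt; every step is destructive.

# 1. Stop the client services (CcmExec = SMS Agent Host, an assumed name)
Stop-Service -Name 'CcmExec','ccmsetup' -Force -ErrorAction SilentlyContinue

# 2. Remove the client directories
Remove-Item "$env:windir\ccm","$env:windir\ccmsetup","$env:windir\ccmcache" `
    -Recurse -Force -ErrorAction SilentlyContinue

# 3. Remove the leftover files
Remove-Item "$env:windir\smscfg.ini","$env:windir\sms*.mif" `
    -Force -ErrorAction SilentlyContinue

# 4. Remove the registry keys
foreach ($key in 'ccm','CCMSETUP','SMS') {
    Remove-Item "HKLM:\SOFTWARE\Microsoft\$key" -Recurse -Force -ErrorAction SilentlyContinue
}

# 5. Remove the WMI namespaces (note the DOUBLE underscore in __Namespace)
Get-WmiObject -Query "SELECT * FROM __Namespace WHERE Name='CCM'" -Namespace 'root' |
    Remove-WmiObject
Get-WmiObject -Query "SELECT * FROM __Namespace WHERE Name='sms'" -Namespace 'root\cimv2' |
    Remove-WmiObject
```

I haven’t run this end-to-end on a production machine, so treat it as a checklist in script form rather than a turnkey tool.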