The Case of the Main Website (Part 1)

Recently I was asked to join a committee at work regarding the creation of an intranet site.  This was an idea that had been smoldering for a little while around campus, but came to the fore with the redesign of the main website.  It’s an event I had seen before.

The previous design divided the page into sections, as was the style at the time.  The redesign is more in line with current aesthetics, with a large background picture and different sections scrolling in the foreground over top of it.  It looks nice, it’s hip, it’s in fashion, but it hasn’t proved all that functional for those of us already on campus.

This event begs the question:  Exactly who is the target audience for an educational institution’s main website?  In times past, the root site was seen as a first stop for those on and off campus, whether or not they were affiliated with the institution.  It contained information for prospective students, parents, alumni, and the like, but also had links to internal resources for current members of the community.  It was one site serving dual roles, but is that what it should be for?

With the rise of the Internet to something we all use and take for granted, the focus of what a main website should be has changed.  The root website has now essentially been turned over to the Marketing Department as a recruiting tool.  There may still be token links to internal resources and information, but the focus has shifted sharply toward advertisement.  I don’t think this is a bad thing, but it has left out a sizable population.

I can remember the days–and maybe where you are they still do this–where every lab computer had its web browser homepage set to the root university website.  Everyone started their Internet adventure there.  With the handing over of the site to Marketing this is now a superfluous gesture that only adds more steps between the user and Google.  Hence the need for an intranet starting point.

Now comes the big question:  What goes on an intranet portal?

The Case of the Home Network (Part 2)

Last time I went through the journey of getting a full network set up throughout our dwelling.  When I left off, the signal from my new dual-band cable modem from Time Warner was strong enough to reach from the basement on one end of the house to the second floor on the other.  Technically, then, I could have stopped right there.  But I have two Wireless N-class routers and the old WRT54G.  I can’t just let them go to waste!

My first idea centered on sharing my iTunes library to the network and, more importantly, making it available to an AppleTV connected to the television.  My master iTunes library is hosted on a Mac Mini that uses my 32″ television for a monitor when needed.  Most of the time, however, it boots at 5PM when I get home from work, shares my library with the house, then shuts itself down at 1AM after I’ve gone to sleep.  In the past I’ve tried to watch movies with it directly, and while that works it isn’t the ideal setup.

When my workplace rotated some of our AppleTVs out of deployment, I took one home and connected it to the network.  Adding this device made it easier to stream from my library.  However, I didn’t like that both the Mac Mini and the AppleTV sat next to the TV, literally one on top of the other, yet had to communicate via the wireless router downstairs.  It seemed rather inefficient to have it set up like that, and it also put two bandwidth-heavy devices on the wireless network.  I could make things simpler.

Using one of the TP-LINK routers, I connected both the Mac Mini and the AppleTV to it via ethernet and disabled the wifi on both devices.  Now content could be streamed between them without ever leaving the router’s internal network.  This not only made media sharing quick, it also gave a nice signal boost to anyone sitting on the couch with an iPhone, iPad, or laptop.

The second TP-LINK ended up on the second floor in the master bedroom.  This soaked the entire second floor with a strong wireless N signal, supplementing the modem in the basement.  Now every floor has its own wireless router blasting radio waves throughout the house.

As for the WRT54G, it currently isn’t deployed.  Right now it sits, unplugged, in my wife’s office in case the wireless card in her computer suddenly fails.  The card should be fine, but an open-box item is still an open-box item.  I want to give it just a little more time before I proclaim it perfect.

When that day does come I will likely redeploy the Linksys to the garage, and push the wireless signal outside the house for when I am working in the garage or yard.  I have an old MacBook that I keep in the garage to look up information when I’m working on one of our cars or something along those lines, so a little signal boost out there couldn’t hurt.

I think now I’m finally happy with the way things stand with the network.  I have solid, fast signal throughout the house.  More importantly, my wife’s office is set and working as it should be.  The question is, can I leave things alone for a little while…

The Case of the Home Network (Part 1)

When we purchased our current home several years ago, one of the requirements–besides having enough bedrooms–was a home office for my wife.  She telecommutes, so a dedicated space for her work was a must, as was a fast, stable home Internet connection.

The best option we had available at the time was Time Warner Cable.  We signed up for their 20Mb service, and I installed all the equipment.  The cable modem ended up in the basement on one end of the house.  My wife’s office ended up on the second floor on the opposite end.  That’s quite a run for the wifi when it has to pass through ceilings/floors with water pipes hidden in the walls.

This was compounded by the relatively weak wireless transmitter inside the cable modem.  The wireless-G cards I had weren’t able to get enough signal to make a reliable, fast connection as they should.  My solution was to re-appropriate Old Reliable:  A Linksys WRT54G router.

After upgrading the WRT54G to the latest DD-WRT firmware, I was able to have the router join the network with my wife’s work desktop plugged in to an ethernet port.  The added signal boost of the Linksys made for a stable connection that worked well enough.  This is how the home network was configured and run for several years.
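For anyone retracing these steps, the client bridge side of this is all DD-WRT GUI work.  From memory (and the menu names vary a bit between DD-WRT builds), the relevant settings were roughly:

  Wireless -> Basic Settings -> Wireless Mode: Client Bridge
  Wireless -> Wireless Security: same SSID, security mode, and passphrase as the cable modem's wifi
  Setup -> Basic Setup: static LAN IP on the modem's subnet, DHCP server disabled, gateway and DNS pointed at the modem

With that in place, anything plugged into the Linksys’s ethernet ports, like the work desktop, shows up as just another device on the main network.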

With this setup, my house had decent enough 54G speeds.  However, as more new devices entered the house–most notably increasingly faster iPads and a couple of upgrades’ worth of iPhones–I couldn’t take advantage of their faster wireless.  It seemed clear, as a nerd, that something had to be done.

To fix this issue I acquired a pair of TP-LINK N600 routers.  Capable of N-class speeds, these devices were destined to upgrade my network and provide better coverage all over the house.  Instead what they wrought was a plethora of headaches as I tried to make everything talk to each other properly.

With the addition of the routers I was finally able to address the issue of a weak wifi signal.  Turning off the transmitter in Time Warner’s modem and replacing it with the much stronger antennas of the TP-LINK unit in the basement, as well as setting up a second TP-LINK on the second floor, meant the signal was good and strong on both floors.  It was a better setup overall, but still not perfect.

One of the first things I did upon receiving the routers was to load a custom firmware on them.  As I mentioned, my Linksys ran DD-WRT, but from my reading the consensus for the TP-LINKs seemed to be OpenWRT.  Having never used OpenWRT, I decided to give it a try.

The immediate issue I ran into was that out of the box, OpenWRT doesn’t have an option to act as a client bridge.  In this mode, the router joins the existing network as a client and passes all of the network information (DHCP requests, network sharing protocols, etc.) on to its own clients—in particular the clients connected to the router via the ethernet ports, which is how my wife’s work computer was connected.  Although her computer could reach the Internet, the lack of a client bridge mode meant that it was isolated.  Things such as its printer couldn’t be shared, and any shares I had available on other devices were invisible to her computer.
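For the record, the workaround most OpenWRT write-ups suggest is the relayd “pseudobridge,” which fakes a bridge by relaying traffic at layer 3.  Consider this a sketch rather than a recipe; the package names and the wwan interface name are the usual defaults, not something I verified on these particular routers:

  # assumes the radio has already joined the main wifi as a client on an interface named "wwan"
  opkg update
  opkg install relayd luci-proto-relay
  uci set network.stabridge=interface
  uci set network.stabridge.proto='relay'
  uci set network.stabridge.network='lan wwan'
  uci commit network
  /etc/init.d/network restart

Because it isn’t a true bridge, some discovery and sharing protocols may still not traverse it cleanly, which is exactly the sort of thing that was breaking for the printer and file shares.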

There was another complicating factor in all of this tinkering and configuring of the wireless network:  My wife still had to work.  As part of her job she facilitates webinars, so her connection has to be stable.  I was essentially doing testing in production, and failure or instability was not an option.  For this reason, once I had her computer online with a good connection out to the wider world I was hesitant to mess with it.

The impetus to change things came a couple of months ago when TWC sent me a letter.  In an effort to compete with newer Internet offerings from other companies they were upgrading all of their plans.  My current speed of 30Mb was about to jump to 200Mb!  As part of this upgrade we would be getting a new modem with dual antennas that promised a significantly stronger signal.  Once it was configured and connected this proved to be the case, with the wifi able to reach every corner of the house.

Solving that problem left me with another, more pleasant issue to deal with:  What to do with the TP-LINK router that served as the primary wifi transmitter for the cable modem?  I had an extra router that wasn’t necessarily needed anymore.

The first thing I decided to do with it was revisit the firmware.  In the time since I had first purchased the routers, a custom DD-WRT firmware was created for the TP-LINK (listed in the DD-WRT router database as a WDR3600).  Once I managed to get OpenWRT off, the original factory firmware back on, and then load DD-WRT, things began to fall into place.

With the router in client bridge mode once again, my wife’s desktop was now able to see the wider network.  The printer was available, shares were visible, and the signal back down to the basement was strong and reliable.  The network was back in good order, as it should have been in the first place.

The final piece of this puzzle fell into place as I was browsing around NewEgg.  On a whim I looked at their desktop wireless cards, and lo and behold they had an open box, triple-antenna Rosewill wireless card for dirt cheap.  I decided to take a flier on it, and ordered.

With the addition of that card, the client bridged router in her office was no longer needed.  The desktop became just another client on the network, the printer was plugged in via USB, and then it was shared out to the rest of the house.  So now, I had two superfluous routers.  Which gave me the opportunity to soak the entire house in wifi…

The Case of Knowing When to Upgrade

Although this may date me, I think it’s fair to say that I am an “old school” first-person-shooter gamer.  I started my FPS days with DOOM, DOOM][ (notice how I wrote that?), moved on to the QUAKE franchise, etc.  I played them all.

For years my computer refresh and update cycle was dictated by id Software release schedules.  I built a Pentium Pro computer for QUAKE, followed by a Pentium II desktop for QUAKEII, continued on with that rig through QUAKEIIIArena, built a new computer for DOOM3, and finally constructed a new machine for RAGE.  I can still remember the first time I fired up a pair of Voodoo II-based 8MB Diamond Monster II video cards (SLI 4EVER!), and the effect that had on QUAKEII.  That was a legitimately jaw-dropping moment for me, as I had been using the software renderer up until that point.

My current computer is now five years old, and was built to play RAGE.  It’s an unlocked i7, 16GB of RAM (upgraded from the original 8GB), SSD primary drive, and an eVGA 4GB GeForce 580.  It played RAGE flawlessly, maintaining a solid 60 frames-per-second rate even when forcing higher resolution textures.  RAGE remains the most fluid and beautifully animated FPS I’ve ever played.

But now there is a new id Software game out.  Although when I say “id Software” I should clarify:  It’s not the id Software I grew up with.  Romero is long gone, and now Carmack has left as well.  id is now part of Bethesda Software, and is no longer the solo video game company run by the Ferrari-driving nerds I followed religiously when I was younger.

So when I saw the announcement that DOOM was being resurrected, I was skeptical.  Given that I’m older now, with much less time to throw down multiple hours of gaming in a row, I don’t really buy games first-run anymore.  The last two games I purchased first-day and full-priced were RAGE, and South Park: The Stick of Truth (which is a perfect South Park game, and the one we’ve secretly always wanted).  Other than that I let new releases slide, wait until all the downloadable content is out, then pick up a package deal on the cheap with a Steam sale.  My strategy would be the same for the new DOOM.

Plus there was another sneaking suspicion I had:  That DOOM was going to suck.  I may be the only person on Planet Earth who actually enjoyed DOOM3 and RAGE, but I had no faith that a new team could recapture the magic.  So when the reviews started to come in, that not only was DOOM good, but it was really, really good, I was shocked.

All of that however, was really just a cover story I told myself to avoid a painful truth:  That my five-year-old computer was not going to be able to handle playing DOOM.  Looking at the minimum specs, my computer didn’t meet them.  I felt sure that if I even got it to run, I’d have to turn down the quality and resolution so low that it would cheapen the game.  I had to do this to an extent back during the time of QUAKEIIIArena, when my beloved Voodoo II cards started to have issues at higher quality settings.

Then, a couple of weeks ago, an announcement came out that you could install the first level of DOOM via Steam for a week to try it out for free.  Since it was free, I decided to give it a whirl.  I “purchased” the demo from the Steam Store and installed it on my five-year-old computer.  I readied myself for the horror of watching my little chitty chitty bang bang try to deal with a current-generation FPS.

As I booted the game the first time, I felt my initial hesitation was being confirmed.  The game was taking a long time to start up–way longer than it should.  I prepared myself to play a 10FPS slideshow of a game, and see how badly I had to mangle the settings to get it playable–if that was even possible.

Once the menu loaded, I set the resolution for my monitor, and jumped right in to start playing.  And I was actually playing.  No slideshow.  The animation was fluid.  The graphics looked good.  I was shocked.

So I dove into the advanced settings to see where things stood.  I cranked them up.  Ultra-everything!  Full resolution!  And again…no problem.  I was playing the game without any issue or slowdown.

I went back into the settings one final time and turned on the real-time frames-per-second meter to show me exactly what I was getting.  The number never dropped below 35FPS.  The game was not just playable; it worked great and looked good doing it.  Again…shocked!

Now I’m left with a conundrum.  My inner nerd desperately wants me to plunk down $60 to start killing imps.  My more logical side, however, continues its counsel of patience:  wait for the price to drop and the DLC to become available.  So far, the latter is winning.

This entire event is really a perfect example of a phenomenon that many in the tech industry have talked about:  Computers are lasting longer.  The usable lifespan of a computer bought today is significantly greater than that of one bought earlier in my career.  Three years used to be the norm, but now things are stretching to four years and beyond.

We’re starting to see this now in the rumors about the new iPhone release cycle.  My first iPhone was a 3G.  I kept that until I bought my iPhone 5.  By the end, the 3G wasn’t actually usable as a smartphone; opening an app like Facebook took minutes.  I promised myself after that experience I wouldn’t do that again, but would instead go every other year.  I was true to my word in purchasing my 6, and now I’m looking to the horizon for the 7 later this year (although what I’m going to do there is a whole other post…).

When technology is new, this rapid progress is understandable.  I get it with regards to computers during the early years of wide acceptance and the growth of the Internet.  I understand what’s happening with the emergence of smartphones and tablets.  However, I didn’t think this process had made it to video gaming rigs.

In the past, gaming rigs were always different.  Almost always home-built, with the absolute latest components, gaming rigs were several steps above your normal, home desktop.  Every six months new video cards, faster RAM, or larger hard drives would come out.  There was a constant nerdy pressure to upgrade.

This phenomenon was fueled by video game makers and reviewers.  In a non-stop effort to make their games prettier, more powerful, more immersive, and more fun, programmers constantly upgraded their own internal computer hardware to see how much further they could push new gear.  Review sites followed suit, and always showed off high-resolution screenshots, ultra-quality videos of gaming footage, and ever-increasing benchmark scores.  It was a never-ending arms race in pursuit of a few more frames per second.

In some ways this is still going on.  Reading pages like this, comparing GeForce video card generations, the nerd part of me gets the itch that says, “Upgrade that video card at least!”  Then however, I remember that I just played a brand-new DOOM on five-year-old hardware, at high quality, without issue.

So it seems the march of progress has slowed down enough to even affect the home-built gaming rig.  The nice thing is that, for now at least, I can hold off raiding NewEgg for the latest gear.  Knowing that I can run DOOM now, and confident I’ll be able to handle the next South Park game when it comes out at the end of the year, I don’t feel compelled to drop $1,500-$2,000 for a completely new system.

My only real fear at this point is component failure.  My desktop has been running solid for five years.  Even when it sits idle it’s running SETI@Home, so it’s doing something every minute of every hour of every day.  That’s a lot of wear and tear on computer equipment in a dusty basement.  As long as I can still get and install replacement parts, however, I’m willing to run that risk.

The Case of Being Mostly Cloudy, Continued

A couple posts ago I wrote about my struggles to make some sense of all the cloud storage I have, and what to do with the spaces that are scattered around the Internet.  As I’ve been working through this I’ve started to make some final decisions.  I’ve upgraded a couple things now that should bring some stability for at least the next year.

My motivation for doing all this, and my hesitation and nervousness as I’ve worked my way through the process, both come down to my pictures.  I am incredibly paranoid about losing the photos I have on my phone, of my children in particular.  One of the major reasons I purchased my 128GB iPhone 6, other than to be able to store every single MP3 I have locally on the device, is to make sure I have room for my pictures.  All my pictures.

I have pictures of my children on my phone that were taken with an iPhone 3G.  I like having those photos there on my phone always accessible.  I have them backed up in several different places, both locally on my hard drives at home (both internal and external to my desktop) as well as in several different cloud locations.  I do not want to ever lose any of them, and I take those insane steps to ensure that doesn’t happen.

The root of this problem is that I have not been able to fully trust iCloud and how Apple treats pictures.  For example, Apple seemingly arbitrarily separates out photos on the Camera Roll from other types of pictures.  Why?  If I take a picture it should go with all the other pictures on my phone.  One spot.  Easy.  The iPhone has never done this–or at least done it very well.

Apple is trying, though–at least I think so.  Enabling the iCloud Photo Library is a step toward making sure that all pictures go into one place and are accessible anywhere.  But again, when I went to turn on the Photo Library on my iPhone I was presented with a message that doing so would remove 3,000+ photos and videos from my phone.  What?  Why?  If I turn on the iCloud Photo Library it should automatically take every picture and video on my phone that isn’t stored in iCloud and upload it.  There is no reason for this.  This is also not the first time changing a setting on my iPhone has hit me with a warning like that.

That is a perfect example of why I haven’t been able to fully trust Apple and iCloud with the well-being of my pictures and videos.  Because of this, I took Microsoft up on their offer of a free year of Office365, which includes 1TB of storage space on OneDrive.  It’s ironic that the one service I’ve come to trust to securely store photos and videos from my iPhone is a Microsoft app.

After signing up with Office365/OneDrive for a year, I triple-checked that every photo, video, slo-mo, burst, and panoramic image on my phone was backed up in several different locations, including a full import into Apple’s Photos program on my MacBook and a copy of all of those files into a separate folder elsewhere on the Mac’s hard drive.  Then…reluctantly…hesitantly…with both fingers crossed…I took the plunge and enabled the iCloud Photo Library, warning and all.

As it stands now, a couple days after agreeing to have a few thousand image and video files wiped off my phone…things seem to be OK.  Everything that was in Photos now shows up in iCloud and on my iPhone.  I’ve also added many more pictures taken with a digital camera dating back to 2006 to the iCloud Photo Library, and those images are now available on every device I have connected to iCloud as well.

So now, after all of that (largely self-inflicted) grief and consternation, I return to my original premise of what to do with all of the cloud storage options I have.  Here’s what I’ve come up with:

  • iCloud: Primary home for all digital photos and videos, and Apple device backups
    • I currently have more than half of my 50GB allotment free, so this space should last a while.
  • OneDrive: Primary photo and video backup
    • The one issue with iCloud I want to guard against is a mistaken edit or file deletion cascading across all of my synced devices before I could stop it.  OneDrive will serve that role for now.  I’ll be revisiting this decision in a year’s time when Microsoft wants $70 to continue my subscription.
  • Google Drive: Personal file sharing across all my devices
    •  Since I use Chrome as my primary browser this will be the easiest place to transfer personal files around to each device.
  • Box:  Work file sharing
    • My institution has an enterprise license for Box, so I’ll store work-related files and documents in it.  This will let me be more mobile in where I work, and make sure that my personal and work files don’t become intermingled.  My personal Box account will now lie fallow.
  • Dropbox:  Abandoned
    • I’ll keep my account, and likely even keep the sync client installed on my desktop at home.  However, with its comparatively tiny space allocation versus the other options I have available, it will likely stay unused and empty.

We’ll see how long this lasts.

The Case of Cleaning SCCM

I ran into a problem over the last couple of days that I hadn’t encountered before: a System Center Configuration Manager client that wouldn’t talk to the management server, and wouldn’t uninstall.  This was a Windows 10 laptop, which may have contributed to the issue.  In the end I had to do some Google-sleuthing to discover the answer, which required a full-on manual uninstall, including removing registry keys, to actually get the client out.

First I tried the standard ccmsetup.exe /uninstall which did exactly nothing.  I tried both regular and administrator command prompts, but to no avail.  My next step was to use the ccmclean.exe utility that was supposed to rip out any SCCM client install.  For me and the Win10 laptop however, it didn’t get everything.

My final step was the manual uninstall.  It requires the following steps, which I found here (a rough PowerShell sketch of the non-WMI steps follows the list):

  1. Removing these services (neither of which was present on the Win10 laptop, as far as I could find):
    1. SMS Agent Host Service
    2. CCMSetup Service
  2. Removing the following directories:
    1. %windir%\ccm
    2. %windir%\ccmsetup
    3. %windir%\ccmcache
  3. Removing the following files:
    1. %windir%\smscfg.ini
    2. %windir%\sms*.mif
  4. Removing these registry keys:
    1. HKLM\Software\Microsoft\ccm
    2. HKLM\Software\Microsoft\CCMSETUP
    3. HKLM\Software\Microsoft\SMS
  5. Removing these WMI namespaces:
    1. Root\CIMV2\SMS
    2. Root\CCM
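Since clicking through Explorer and regedit for all of that gets old quickly, here is the rough PowerShell equivalent of the non-WMI steps above.  Run it from an elevated prompt; the SilentlyContinue flags are there because, as on my laptop, some of these items may simply not exist:

  # stop the client services if they happen to be present (step 1)
  Stop-Service -Name CcmExec, ccmsetup -Force -ErrorAction SilentlyContinue
  # remove the client directories (step 2)
  Remove-Item "$env:windir\ccm", "$env:windir\ccmsetup", "$env:windir\ccmcache" -Recurse -Force -ErrorAction SilentlyContinue
  # remove the leftover files (step 3)
  Remove-Item "$env:windir\smscfg.ini" -Force -ErrorAction SilentlyContinue
  Remove-Item "$env:windir\sms*.mif" -Force -ErrorAction SilentlyContinue
  # remove the registry keys (step 4)
  Remove-Item "HKLM:\Software\Microsoft\CCM", "HKLM:\Software\Microsoft\CCMSETUP", "HKLM:\Software\Microsoft\SMS" -Recurse -Force -ErrorAction SilentlyContinue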

The last step presented a problem as the tools to edit WMI didn’t work on the Win10 computer.  I found some PowerShell commands in the comments section of this page that are supposed to do this instead:

  • get-wmiobject -computername [COMP] -query "SELECT * FROM __Namespace WHERE Name='CCM'" -Namespace "root" | Remove-WmiObject
  • get-wmiobject -query "SELECT * FROM __Namespace WHERE Name='sms'" -Namespace "root\cimv2" | Remove-WmiObject

Despite trying them both verbatim, fiddling with syntax, and rewording both as older-style PowerShell code, I couldn’t get either of those to work.  Regardless, once I went through all those steps I was able to reinstall the SCCM client, and it now seems to work correctly.
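If I ever have to do this again on a Windows 10 machine, my next attempt would probably be the newer CIM cmdlets, which are the supported replacements for get-wmiobject.  I haven’t tested these in this particular scenario, so treat them as a sketch:

  Get-CimInstance -Namespace "root" -Query "SELECT * FROM __Namespace WHERE Name='CCM'" | Remove-CimInstance
  Get-CimInstance -Namespace "root\cimv2" -Query "SELECT * FROM __Namespace WHERE Name='SMS'" | Remove-CimInstance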

The Case of Being Mostly Cloudy

For a while it seemed that everyone was always giving away copious amounts of storage on whatever cloud service they were offering.  I signed up for a few, but never really used them much.  It was storage I had available, but I didn’t do much with it.

My primary issue was that with some of these services I was signed up twice, once tied to my work email and once to a personal account.  This rather defeated the purpose of cloud storage, since I didn’t have one account easily accessible from different places.  The other issue was that at work I long ago switched my primary computer to a persistent virtual machine.  The data drive on that VM is only about 6GB, which had to provide room for all of my settings and local data.  That left no room to sync Dropbox, Box, or OneDrive.

A few years ago I actually began to formalize which cloud storage services I used and how I used them.  The impetus was twofold.  First, moving to Chrome as my web browser allowed easy access to my Google Drive account.  Second, Dropbox and later OneDrive implemented automatic photo uploading from your smartphone.  This gave me yet another place to securely back up and store all of the photos on my phone.

As time has progressed, however, things have shifted, causing me to reevaluate how I use my cloud storage.  Google Drive has become my standard location for saving individual important files to the cloud for backup.  Any document that I want to be sure will survive a catastrophic hard drive crash ends up there.

Dropbox, meanwhile, is essentially dead to me at this point.  My automatic photo uploads quickly overwhelmed the less than 6GB of space I have.  It’s been full for quite a while, and with that size limit I’m not sure what I will use it for when I finally clean it out.  A backup for Google Drive?  That seems a bit excessive…

OneDrive is now heading toward the same fate as Dropbox.  The 25GB of storage I currently have has been perfect for automatic photo uploading, and has allowed me to store all of my photos there for easy review and retrieval.  In July, however, OneDrive will reduce my space to 5GB unless I subscribe to Office 365.  Doing so would get me 1TB (!) of space, but that means paying for another cloud storage service, because…

There is also iCloud.  For the past couple of years I have paid for the lowest tier of storage, because that was what I needed to back up my iPhones, iPads, and MacBooks (I should point out that everything other than the iPhones is a work-owned device).  I have resisted converting my legacy iCloud account into a proper iCloud Drive account, but now my 20GB storage limit is forcing my hand.

On a trip this past weekend I took a ton of photos, including long runs of burst shots with my iPhone.  My iCloud storage was already nearing the limit, and this flurry of activity has run me out of space.  So my decision now is whether or not to upgrade and convert to iCloud Drive, which means I have to change how I think about iCloud and what I use it for.

Up until this point, iCloud for me had one purpose:  To backup my i-devices.  That was it.  It was not for file storage, syncing, or other activities such as those.  iCloud was my first line of backup to guard against an iPhone or iPad data loss.  Google Drive backed up individual files of my choosing.  OneDrive was a second backup for my pictures.  It was a three-phase approach, and now two of those phases are shifting.

As it stands right now, OneDrive will likely become superfluous storage along with Dropbox.  I’m not sure what either will be used for.  Google Drive will remain as it is.  iCloud may become my primary backup method after an upgrade and conversion to iCloud Drive.  I haven’t quite worked all of this out yet.

The Case of the Passive Install, Acrobat 2015 (DC) Revisited

In a previous post I discussed preparing passive installers for Microsoft Office 2013 as well as Acrobat 2015 (DC).  I’ve had to make a few minor tweaks to the former, but I’ve run into a bit of a showstopper with the latter.  Although the Acrobat installer works fine in normal situations, when it is installed as part of a base image that will have sysprep run on it, the install doesn’t behave properly after image deployment.

Because we are an enterprise environment we have a volume license and installer for Acrobat 2015.  Consequently, we don’t have or need to sign in with any Adobe ID to use the software.  Users can if they wish, but it isn’t required.  The installer we have skips all of this and starts the program directly.  But after a computer it is installed on is sysprepped and that image is copied to a new computer, the Acrobat install defaults back to the original behavior of requiring an Adobe ID sign-in at launch.  If you cancel the login, the program closes.

As a stopgap measure I’ve fallen back to Acrobat XI Professional.  I’m not sure how to proceed with Acrobat 2015.  For normal computers it’s just a matter of installing the program post-imaging, which isn’t that big a deal.  In my case however, I’m attempting to build base images for our virtual desktop deployment.  There is no “post-image” process I can subject my non-persistent VDI computers to, so I can’t install Acrobat 2015 after-the-fact.

I’m pretty much stuck.

The Case of Managed or Unmanaged Apps

As we have begun to deploy apps for iPads on a wider scale across the environment, I’m struggling with what format to use.  Do we stick with the original method of redeeming App Store codes, or do we fully and completely switch to Managed Apps?  I’m not sure which is the correct answer.

With app codes, the rub is that when a code is assigned to a person, the license for that app becomes permanently assigned to the AppleID that redeemed the code.  In contrast, with the Managed Distribution method the license for an app is temporarily assigned to the user’s AppleID, and can be taken back using your Enterprise Mobility Management system.  So, on the surface, for an enterprise environment the latter method seems like a no-brainer.  But there’s one small hitch.

iOS apps are sandboxed, meaning that each app essentially exists on an island unto itself, with limited to no interaction with other apps or the device’s underlying system.  The problem is that not only is the app isolated, but all of the data associated with that app is marooned with it.  So, for example, if I mark up a PDF file with iAnnotate, that PDF remains part of and associated with the iAnnotate app.  If iAnnotate is a Managed App and I revoke the license, not only is the app itself pulled, but any data saved with that app gets pulled as well.

The ramification of this is that at some point a student, staff, or faculty member is going to lose some of their saved files.  When the license is pulled and recovered by the EMM system it will take the user’s data with it, and I’m not sure there is any way to get it back.  The easy “not-my-fault!” answer is to tell everyone to make sure everything is saved to a cloud storage service such as Box, but expecting 100% compliance with that is foolish.  The administration might let it slide when random Suzy Student graduates and loses her annotated PDFs, but when Dr. Administrator loses a vital planning document because she’s transferring to another college it will become a much bigger deal.

So, I’m torn.  On the one hand I would like users–students especially–to be able to take their work with them when they leave, but ensuring that means giving them a permanent license to every app we buy.  I don’t want to take any of their work away from them when a license is pulled from their iPad.

On the other hand, I want to save money and offer a wider variety of apps to users.  Doing that, however, requires fully embracing the Managed Distribution method to make every dollar stretch as far as we can, rather than constantly purchasing more and more copies of the apps we regularly use.  And I cannot have any confidence that students will handle their data properly and take the extra step–and it is an extra step with iOS–to move everything to and from the cloud when working with their files.

I’m unsure which way to lean, but I know what way the budget is going to push me.

The Case of the Passive Install: Office 2013 and Acrobat Pro 2015 (DC)

I spent today configuring and updating our installation points for Office 2013 and Acrobat Professional 2015, a.k.a. Acrobat DC.  Office 2013 went fairly smoothly, as it usually does:  I configured a custom passive install and automatically added Service Pack 1.  The process is pretty simple (a rough command sketch follows the list):

  1. Run “setup.exe /admin” from a Command Line.
  2. Set the options and changes you wish from the customization tool.
  3. Save your MSP file to the Updates folder.
  4. Download the offline installer for the latest Service Pack.
  5. Run the SP installer with the “/extract” switch and unzip all of the files to the Updates folder.
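In command form, and assuming the installation point lives at something like \\server\office2013 (the share path and Service Pack file name here are placeholders), that boils down to:

  pushd \\server\office2013
  setup.exe /admin
  rem save the resulting .MSP into the Updates folder, then extract the Service Pack into it as well:
  officesp2013-x64.exe /extract:Updates

The extracted Service Pack files sit alongside the MSP in Updates, and setup applies everything automatically during the passive install.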

Acrobat Pro however, was a little more annoying.

  1. Download Adobe’s Customization Wizard.
  2. Once you have that, open the AcroPro.msi from within the wizard.
  3. Make your changes, generate the Transform file, and save that to the Transforms folder in the Acrobat installation directory.
  4. Save the AcroPro.msi file.

The problem I kept running into, and didn’t solve but managed to work around, was that certain options from within the Customization Tool caused an error:

Unable to Submit Changes, MSI Set Value failed, Error Code = 259.

I googled high and low, the full width and breadth of the Internet, and could find no answer.  This was further complicated by our being an enterprise customer:  although our package said it was “Acrobat Professional DC,” it was actually “Acrobat Professional 2015,” also known as the “Classic Track.”  The slight version difference made getting the proper update package more complicated than it should have been, and may be contributing to the errors with the Customization Wizard.

In the end, on the PC side I was able to change or disable most everything I wanted to in the installer.  I ended up creating a simple batch file that runs the Acrobat Setup program, which performs a passive install, then follows that immediately with the latest MSP update package.  Although the update runs silently, the open window of the batch file is enough to let me know the computer is still processing the update.  All of this seems to work well enough to do what I want.
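For the curious, the batch file isn’t anything fancy.  It’s something along these lines, where the MSP name is a placeholder for whatever the current Classic Track patch is, and /sPB is (as I understand it) the Acrobat bootstrapper’s silent-with-progress-bar switch:

  @echo off
  rem run the Acrobat bootstrapper passively; it picks up the transform referenced in setup.ini
  start /wait "" "%~dp0Setup.exe" /sPB
  rem then apply the latest Classic Track update quietly
  start /wait "" msiexec.exe /p "%~dp0AcrobatUpd.msp" /qn /norestart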

On the Mac side, things were much easier for Acrobat, after a brief false start.  Adobe does offer a Customization Wizard on the Mac side, but when attempting to use it the tool wants a serial number.  As an enterprise customer, we don’t have one the way it wants it.  So that was a bit of a dead end.

However, after getting our enterprise installer and the proper update package for Acrobat Professional 2015, I simply loaded everything into AirWatch and let our EMM system handle it.  When loaded into Products in AirWatch and pushed down via a Files/Actions profile, the installer and update PKG file ran silently and flawlessly.  That was definitely the way to go.


The Case of the App That Wouldn’t Go Away

I ran into an odd problem I had not encountered before: an app that wouldn’t delete from an iPad.

All of our iPads are managed by AirWatch.  As part of that management, I force only a single app onto every enrolled iPad:  the AirWatch Agent.  To help ensure this app gets installed, I have a Compliance Policy enabled that seeks to repeatedly annoy the user into submission should they refuse the push on enrollment (or should something else happen that prevents the Agent from installing).

On a coworker’s Device Enrollment Program-controlled iPad her Agent app would not install or delete.  The icon remained dimmed with a “Waiting…” status.  To solve this problem I removed her iPad from management, and re-enrolled it.  This seemed to have no immediate effect.

The next morning I put her user account into a special group that exempts her and her devices from the app push and the compliance policy.  I then sent a Query command to her iPad.

I’m not sure if the steps I took this morning fixed the problem, or if I simply wasn’t patient enough after re-enrollment the previous day.  Regardless, the Agent did finally install, and her iPad has returned to compliance.