Intermittent Fasting Update – August 2020

My weigh-in this morning was 241.4, but the average for the month is a little higher.
As you can see from the chart below, the past 12 months have been full of ups and downs.
You can see the gradual decline from my past posts discussing my progress before falling off the wagon.
This month has been interesting for sure.
At the start of the month, I took a road trip with my 16-year-old brother from Texas up to the Great Lakes.

During our trip I stuck to my eating windows but went absolutely hog wild with what we ate.
We were having a good time trying all of the local food and enjoying each other’s company.
Side note: Wisconsin cheese curds are fucking amazing… I thought I’d mention something else here as well that my wife keeps bringing up.
My current ritual is to weigh in on Friday mornings right after I wake up and use the bathroom.
Depending on what I’m doing with work, this might be 9-10am even if I wake up at 8am.
During that time from rise to first bathroom break, I do not drink any water.
I also try to not go ham with the water on Thursday night.
My thought is that if I wake up and chug a 32 oz cup of water, I’ve essentially “gained” 2 pounds.
In terms of keeping an accurate track of my weight loss numbers, this seems like an okay practice.
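If you want to sanity-check the math on that, it’s just a unit conversion. A quick sketch in Python (the only inputs are the standard weight of water and the kg-to-lb factor):

```python
# Water-weight math: a US fluid ounce of water weighs about 29.57 grams.
GRAMS_PER_FL_OZ = 29.57
POUNDS_PER_KILOGRAM = 2.20462

def water_weight_lb(fluid_ounces):
    """Approximate scale weight, in pounds, of a given volume of water."""
    return fluid_ounces * GRAMS_PER_FL_OZ / 1000 * POUNDS_PER_KILOGRAM

print(round(water_weight_lb(32), 2))  # 2.09 -> a 32 oz chug really is about 2 lb on the scale
```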
My wife thinks it’s unhealthy to think of water in such a negative way.
I’m not sure if we’re having some sort of miscommunication, as I’ve tried explaining it a different way and she says she understands but still thinks it’s problematic.
Tell me your thoughts; I’m interested to hear them.
My weekly routine.
I’ve had various people ask about my routine so I thought I’d update the internet here with what I’m doing and what’s working currently.
If I change something in between these posts I’ll try to remember to make a note of it in future posts.
Sunday.
At the start of the week, I’m usually volunteering with my local church.
Eating is a social activity we usually partake in with friends, so I keep Sundays a bit lax.
Obviously with Covid-19, we’re not going to church or seeing friends, but I still eat dinner and sometimes a light lunch.
I aim to stop all food intake by 10 PM.
Monday.
This is a fasting day.
No food, only water and my meds (asthma pills and some supplements).
Ask your doctor if your medicine needs to be taken with food.
Mine do not.
Tuesday.
Around 5-6 PM (or later) I’ll eat dinner with the goal of finishing by 10 PM.
My current diet consists of two meals from https://www.cleaneatzkitchen.com/ and then a bag of popcorn if I’m still hungry.
I’ve really been trying to eat my food slower and allow the hormones to work their magic to make me feel full.
The problem I still have to a degree is my ability to inhale food at such a rate that my body can’t even react to being full.
Most of the time I’ll start my fasting timer right after I finish eating, regardless of the time.
Bed time is around midnight.
Wednesday.
Basically a repeat of Tuesday.
I may eat an extra serving of some frozen vegetables in preparation for Thursday, but it depends.
Thursday.
Another full fasting day.
No food, just water and my meds.
Like I mentioned before, I try to not go crazy with the liquids after 10pm or so because of the weigh-in on Friday morning.
Basically I just want to ensure that I’ll have to pee soon after waking up and that my body is actually getting rid of the extra water I don’t need.
Friday.
Weigh in day.
I have a lot of work calls on Fridays so it’s a busy morning.
If I have to use the bathroom, I’ll do that and then immediately weigh myself.
Tracking is always done in my birthday suit for maximum results and easier reproduction.
After that, it’s water and meds until 5-6 PM for dinner.
Same meals as the previous week, though I’ve been known to grab some Wingstop on the weekend.
Aim to finish eating by 10 PM.
Saturday.
A repeat of my other eating days basically.
If we were having social events, I’d sometimes be less picky about eating times, but I still aim for 16 hours fasted at least.
Rarely do we have events before noon so in those cases I would politely decline anything until later in the day.
If we do have breakfast plans with someone, I’d probably eat, but opt for as low carb as possible.
Eggs, bacon, vegetables.
An omelette does sound pretty good right now.
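Side note for the spreadsheet-inclined: checking a window against the 16-hour target is just elapsed time between the last bite and the next one. A tiny sketch (the times here are made up):

```python
from datetime import datetime

def hours_fasted(last_bite, first_bite):
    """Hours between finishing one meal and starting the next."""
    return (first_bite - last_bite).total_seconds() / 3600

# Finish eating Friday at 10 PM, eat again Saturday at 5 PM:
print(hours_fasted(datetime(2020, 8, 14, 22, 0), datetime(2020, 8, 15, 17, 0)))  # 19.0 hours
```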
Re: Food choices.
I’ve historically made pretty terrible food choices.
Weighing 310 pounds is evidence enough of that.
Self control when it comes to food has been very difficult for me, and frankly a lot of other things as well.
I’m very spontaneous and when I want something I tend to get it pretty quickly afterwards.
New toys, going somewhere, eating something.
I’ve really had to work at controlling these urges and work on planning ahead of time.
As part of embracing more planning and self control, I opted to purchase pre-cooked meals in bulk.
I’ve essentially removed any ability to cheat by limiting myself to these pre-purchased meals and a handful of pre-approved snacks.
So far, I have to admit that it seems to be working.
I’m not tracking it explicitly, but my daily calorie intake on feasting days is somewhere around 1200-1600 calories.
For a dude who still weighs 240 pounds, this is a significant deficit from what my body needs to function, thereby forcing it to burn more fat.
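Here’s the back-of-the-envelope version of that deficit. The 2,800-calorie maintenance figure is just an illustrative guess (not something I’ve measured), and the old 3,500-calories-per-pound rule is only a rough approximation:

```python
# Rough deficit math. Assumed numbers, not measured ones.
MAINTENANCE_KCAL = 2800   # illustrative maintenance estimate for a ~240 lb guy
KCAL_PER_LB_FAT = 3500    # classic rule of thumb; real-world results vary

def weekly_loss_lb(feast_day_intake, feast_days=5):
    """Estimate weekly fat loss: feast days at the given intake, the rest fully fasted."""
    fast_days = 7 - feast_days
    weekly_deficit = (MAINTENANCE_KCAL - feast_day_intake) * feast_days + MAINTENANCE_KCAL * fast_days
    return weekly_deficit / KCAL_PER_LB_FAT

print(round(weekly_loss_lb(1400), 1))  # ~3.6 lb/week under these assumptions
```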
During my first attempt at this last year, I would say I was normally eating more than 2,000 calories a day making my own food.
The portion sizes were larger mainly, and I wasn’t afraid to load up on low carb, yet calorie rich, liquids like ranch or butter.
I wonder when I’ll get to eat “normal” food again but for now this is working.
Especially with the world in a state of quasi-lockdown, it is nice to not have to worry about making grocery store runs.
My wife buys food for herself on her way to / from work.
Conclusion.
If you have questions, feel free to leave a comment or reach out on Twitter @daronspence.
I’m sometimes slow to reply here on the site, but if more people comment it might motivate me. Also feel free to join my fasting circle on the app I use.
iOS only though.
Sorry.
https://lifefastingtracker.app.link/z3D7qHg2W8

The messy dilemma of cloud operations.
Sep 3 Responsibility for cloud operations is often a political football in enterprises.
Sometimes nobody wants it; it’s a toxic hot potato that’s apparently coated in developer cooties.
Sometimes everybody wants it, and some executives think that control over it is going to ensure their next promotion / a handsome bonus / attractiveness for their next job.
Frequently, developers and the infrastructure & operations (I&O) orgs clash over it.
Sometimes, CIOs decide to just stuff it into a Cloud Center of Excellence team which started out doing architecture and governance, and then finds itself saddled with everything else, too.
Lots of arguments are made for it to live in particular places and to be executed in various ways.
There’s inevitably a clash between the “boring” stuff that is basically lifted-and-shifted and rarely changes, and the fast-moving agile stuff.
And different approaches to IaaS, PaaS, and SaaS.
And and and… Well, the fact of the matter is that multiple people are probably right.
You don’t actually want to take a one-size-fits-all approach.
You want to fit operational approaches to your business needs.
And you maybe even want to have specialized teams for each major hyperscale provider, even if you adopt some common approaches across a multicloud environment.
(Azure vs. non-Azure, i.e. Azure vs. AWS, is a common split, often correlated closely to Windows-based application environments vs. Linux-based application environments.) Ideally, you’re going to be highly automated, agile, cloud-native, and collaborative between developers and operators (i.e. DevOps).
But maybe not for everything (i.e. not all apps are under active development).
Plus, once you’ve chosen your basic operations approach (or approaches), you have to figure out how you’re going to handle cloud configuration, release engineering, and security responsibilities.
(And all the upskilling necessary to do that well!) That’s where people tend to really get hung up.
How much responsibility can I realistically push to my development teams?
How much responsibility do they want?
How do I phase in new operational approaches over time?
How do I hook this into existing CI/CD, agile, and DevOps initiatives?
There’s no one right answer.
However, there’s one answer that is almost always wrong, and that’s splitting cloud operations across the I&O functional silos — i.e., the server team deals with your EC2 VMs, your NetApp storage admin deals with your Azure Blobs, your F5 specialist configures your Google Load Balancers, your firewall team fights with  your network team over who controls the VPC config (often settled, badly, by buying firewall virtual appliances), etc.
When that approach is taken, the admins almost always treat the cloud portals like they’re the latest pointy-clicky interface for a piece of hardware.
This pretty much guarantees incompetence, lack of coordination, and gross inefficiency.
It’s usually terrible regardless of what scale you’re at.
Unfortunately, it’s also the first thing that most people try (closely followed by massively overburdening some poor cloud architect with Absolutely Everything Cloud-Related). What works for most orgs: some form of cloud platform operations, where cloud management is treated like a “product”. It’s almost an internal cloud MSP approach, where the cloud platform ops team delivers a CMP suite, cloud-enabled CI/CD pipeline integrations, templates and automation, other cloud engineering, and where necessary, consultative assistance to coders and to application management teams.
That team is usually on call for incident response, but the first line for incidents is usually the NOC or the like, and the org’s usual incident management team.
But there are lots of options.

Gartner clients: Want a methodical dissection of pros and cons; cloud engineering, operating, and administration tasks; job roles; coder responsibilities; security integration; and other issues? Read my new note, “Comparing Cloud Operations Approaches“, which looks at eleven core patterns along with guidance for choosing between them, and making a range of accompanying decisions.
Tiering self-service by user competence.
Aug 10 A nontrivial chunk of my client conversations are centered on the topic of cloud IaaS/PaaS self-service, and how to deal with development teams (and other technical end-user teams, i.e.
data scientists, researchers, hardware engineers, etc.) that use these services.
These teams, and the individuals within those teams, often have different levels of competence with the clouds, operations, security, etc., but pretty much all of them want unfettered access.
Responsible governance requires appropriate guidelines (policies) and guardrails, and some managers and architects feel that there should be one universal policy, and everyone — from the highly competent digital business team, to the data scientists with a bit of ad-hoc infrastructure knowledge — should be treated identically for the sake of “fairness”.
This tends to be a point of particular sensitivity if there are numerous application development teams with similar needs, but different levels of cloud competence.
In these situations, applying a single approach is deadly — either for agility or your crisis-induced ulcer.
Creating a structured, tiered approach, with different levels of self-service and associated governance guidelines and guardrails, is the most flexible approach.
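To make that concrete, here’s a minimal sketch of what a tier map might look like; the tier names and guardrails are invented purely for illustration, not a prescription:

```python
# Hypothetical self-service tiers; names and guardrails are invented for illustration.
SELF_SERVICE_TIERS = {
    "full": {       # strong cloud architecture, engineering, and security skills in-team
        "portal_access": True,
        "custom_iac": True,
        "review": "in-team peer review plus automated pipeline checks",
    },
    "guided": {     # moderate competence: customize only within approved patterns
        "portal_access": True,
        "custom_iac": False,
        "review": "central cloud team reviews all changes",
    },
    "catalog": {    # limited competence: provision pre-approved templates on request
        "portal_access": False,
        "custom_iac": False,
        "review": "pre-approved templates vended via the service catalog",
    },
}

def guardrails_for(tier):
    """Look up the governance guardrails that apply to a team's assessed tier."""
    return SELF_SERVICE_TIERS[tier]
```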
Furthermore, teams that deploy primarily using a CI/CD pipeline have different needs from teams working manually in the cloud provider portal, which in turn are different from teams that would benefit from having an easy-vend template that gets provisioned out of a ServiceNow request.
The degree to which each team can reasonably create its own configurations is related to the team’s competence with cloud solution architecture, cloud engineering, and cloud security.
Not every person on the team may have a high level of competence; in fact, that will generally not be the case.
However, at the very least, for full self-service there needs to be at least one person with strong competencies in each of those areas, who has oversight responsibilities, acts as an expert (providing assistance and mentorship within the team), and does any necessary code review.
If you use CI/CD, you also want automation of such review in your pipeline — review that includes your infrastructure-as-code (IaC) and cloud configs, not just the app code (i.e. a tool like Concourse Labs).
Even if your whole pipeline isn’t automated, you want review of IaC during the dev stage, and not just when it triggers a cloud security posture management tool (like Palo Alto’s Prisma Cloud or Turbot), whether in dev, test, or production.
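To illustrate the kind of automated IaC review I mean, here’s a toy policy check over a generic resource list. This is a sketch of the pattern only — not the actual interface of Concourse Labs, Prisma Cloud, or Turbot, and the resource shape is invented:

```python
# Toy IaC policy check: scan declared resources before anything reaches the cloud.
def check_resources(resources):
    """Return a list of human-readable policy violations."""
    violations = []
    for res in resources:
        if res.get("type") == "object_storage" and res.get("public_read"):
            violations.append(f"{res['name']}: public read access is prohibited")
        if res.get("type") == "vm" and "owner" not in res.get("tags", {}):
            violations.append(f"{res['name']}: missing required 'owner' tag")
    return violations

# Run as a pipeline stage: a non-zero exit fails the build.
declared = [
    {"type": "object_storage", "name": "logs-bucket", "public_read": True},
    {"type": "vm", "name": "web-01", "tags": {"owner": "team-web"}},
]
problems = check_resources(declared)
if problems:
    raise SystemExit("\n".join(problems))
```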
Who determines “competence”?
To avoid nasty internal politics, it’s best to set this standard objectively.
Certifications are a reasonable approach, but if your org isn’t the sort that tends to pay for internal certifications or the external certifications (AWS/Azure Solution Architect, DevOps Engineer, Security Engineer, etc.) seem like too high a bar, you can develop an internal training course and certification.
It’s not a bad idea for all of your coders (whether app developers, data scientists, etc.) that use the cloud to get some formal training on creating good and secure cloud configurations, anyway.

(For Gartner clients: I’m happy to have a deeper discussion in inquiry. And yes, a formal research note on this is currently going through our editing process and will be published soon.)
Hunting the Dread Gazebo of Repatriation.
Jul 27 (Confused by the title of this post? Read this brief anecdote.) The myth of cloud repatriation refuses to die, and a good chunk of the problem is that users (and poll respondents) use “repatriation” in a wild array of ways, but non-cloud vendors want you to believe that “repatriation” means enterprises packing up all their stuff in the cloud and moving it back into their internal data centers — which occurs so infrequently that it’s like a sasquatch sighting.
A non-comprehensive list of the ways that clients use the term “repatriation” that have little to nothing to do with what non-cloud (or “hybrid”) vendors would like you to believe: Outsourcing takeback. The origin of the term comes from orgs that are coming back from traditional IT outsourcing.
However, we also hear cloud architects say they are  “repatriating” when they gradually take back management of cloud workloads from a cloud MSP; the workloads stay in the cloud, though.
Migration pause. Some migrations to IaaS/IaaS+PaaS do not go well.
This is often the result of choosing a low-quality MSP for migration assistance, or rethinking the wisdom of a lift-and-shift.
Orgs will pause, switch MSPs and/or switch migration approaches (usually to lift-and-optimize), and then resume.
Some workloads might be temporarily returned on-premises while this occurs.
SaaS portfolio rationalization. Sprawling adoption of SaaS, at the individual, team, department or business-unit level, can result in one or more SaaS applications being replaced with other, official, corporate SaaS (for instance, replacing individual use of Dropbox with an org-wide Google Drive implementation as part of G-Suite).
Sometimes, the org might choose to build on-premises functionality instead (for instance, replacing ad-hoc SaaS analytics with an on-prem data warehouse and enterprise BI solution).
This is overwhelmingly the most common form of “cloud repatriation”.
Development in the cloud, production on premises.
While the dev/prod split of environments is much less common than it used to be, some organizations still develop in cloud IaaS and then run the app in an on-prem data center in production.
Orgs like this will sometimes say they “repatriate” the apps for production.
The Oops. Sometimes organizations attempt to put an application in the cloud and it Just Doesn’t Go Well.
Sometimes the workload isn’t a good match for cloud services in general.
Sometimes the workload is just a bad match for the particular provider chosen.
Sometimes they make a bad integrator choice, or their internal cloud skills are inadequate to the task.
Whatever it is, people might hit the “abort” button and either rethink and retry in the cloud, or give up and put it on premises (either until they can put together a better plan, or for the long term).
Of course, there are the sasquatch sightings, too, like the Dropbox migration from AWS (also see the five-year followup), but those stories rarely represent enterprise-comparable use cases.
If you’re one of the largest purchasers of storage on the planet, and you want custom hardware, absolutely, DIY makes sense.
(And Dropbox continues to do some things on AWS.) Customers also engage in broader strategic application portfolio rationalizations that sometimes result in groups of applications being shifted around, based on changing needs.
While the broader movement is towards the cloud, applications do sometimes come back on-premises, often to align to data gravity considerations for application and data integration.
None of these things are in any way equivalent to the notion that there’s a broad or even common movement of workloads from the cloud back on-premises, though, especially for those customers who have migrated entire data centers or the vast majority of their IT estate to the cloud.
(Updated with research: In my note for Gartner clients, “Moving Beyond the Myth of Repatriation: How to Handle Cloud Projects Failures”, I provide detailed guidance on why cloud projects fail, how to reduce the risks of such projects, and how — or if — to rescue troubled cloud projects.)
Building multicloud expertise.
Jul 16 Building cloud expertise is hard.
Building multicloud expertise is even harder.
By “multicloud” in this context, I mean “adopting, within your organization, multiple cloud providers that do something similar” (such as adopting both AWS and Azure).
Integrated IaaS+PaaS providers are complex and differentiated entities, in both technical and business aspects.
Add in their respective ecosystems — and the way that “multicloud” vendors, managed service providers (MSPs) etc.
often deliver subtly (or obviously) different capabilities on different cloud providers — and you can basically end up with a multicloud katamari that picks up whatever capabilities it randomly rolls over.
You can’t treat them like commodities (a topic I cover extensively in my research note on Managing Vendor Lock-In in Cloud IaaS).
For this reason, cloud-successful organizations that build a Cloud Center of Excellence (CCOE), or even just try to wrap their arms around some degree of formalized cloud operations and governance, almost always start by implementing a single cloud provider but plan for a multicloud future.
Successful multicloud organizations have cloud architects that deeply educate themselves on a single provider, and their cloud team initially builds tools and processes around a single provider — but the cloud architects and engineers also develop some basic understanding of at least one additional provider in order to be able to make more informed decisions.
Some basic groundwork is laid for a multicloud future, often in the form of frameworks, but the actual initial implementation is single-cloud.
Governance and support for a second strategic cloud provider is added at a later date, and might  not necessarily be at the same level of depth as the primary strategic provider.
Scenario-specific (use-case-specific or tactical) providers are handled on a case-by-case basis; the level of governance and support for such a provider may be quite limited, or may not be supported through central IT at all.
Individual cloud engineers may continue to have single-cloud rather than multicloud skills, especially because being highly expert in multiple cloud providers tends to boost market-rate salaries to levels that many enterprises and mid-market businesses consider untenable.
(Forget using training-cost payback as a way to retain people; good cloud engineers can easily get a signing bonus more than large enough to deal with that.) In other words: while more than 80% of organizations are multicloud, very few of them consider their multiple providers to be co-equal.
Refining the Cloud Center of Excellence.
Jul 13 What sort of org structures work well for helping to drive successful cloud adoption?
Every day I talk to businesses and public-sector entities about this topic.
Some have been successful.
Others are struggling.
And the late-adopters are just starting out and want to get it right from the start.
Back in 2014, I started giving conference talks about an emerging industry best practice — the “Cloud Center of Excellence” (CCOE) concept.
I published a research note at the start of 2019 distilling a whole bunch of advice on how to build a CCOE, and I’ve spent a significant chunk of the last year and a half talking to customers about it.
Now I’ve revised that research, turning it into a hefty two-part note on How to Build a Cloud Center of Excellence: part 1 (organizational design) and part 2 (Year 1 tasks).
Gartner’s approach to the CCOE is fundamentally one that is rooted in the discipline of enterprise architecture and the role of EA in driving business success through the adoption of innovative technologies.
We advocate a CCOE based on three core pillars — governance (cost management, risk management, etc.), brokerage (solution architecture and vendor management), and community (driving organizational collaboration, knowledge-sharing, and cloud best practices surfaced organically).
Note that it is vital for the CCOE to be focused on governance rather than on control.
Organizations who remain focused on control are less likely to deliver effective self-service, or fully unlock key cloud benefits such as agility, flexibility and access to innovation.
Indeed, IT organizations that attempt to tighten their grip on cloud control often face rebellion from the business that actually decreases the power of the CIO and the IT organization.
Also importantly, we do not think that the single-vendor CCOE approaches (which are currently heavily advocated by the professional services organizations of the hyperscalers) are the right long-term solution for most customers.

A CCOE should ideally be vendor-neutral and span IaaS

PaaS, and SaaS in a multicloud world, with a focus on finding the right solutions to business problems (which may be cloud or noncloud).
And a CCOE is not an IaaS/PaaS operations organization — cloud engineering/operations is a separate set of organizational decisions (I’ll have a research note out on that soon, too).
Please dive into the research (Gartner paywall) if you are interested in reading all the details.
I have discussed this topic with literally thousands of clients over the last half-dozen years.
If you’re a Gartner for Technical Professionals client, I’d be happy to talk to you about your own unique situation.
Finally, private cloud identical to public cloud.
Jul 9 Digging into my archive of past predictions… In a research note on the convergence of public and private cloud, published almost exactly eight years ago in July 2012, I predicted that the cloud IaaS market would eventually deliver a service that delivered a full public cloud experience as if it were private cloud — at the customer’s choice of data center, in a fully single-tenant fashion.
Since that time, there have been many attempts to introduce public-cloud-consistent private cloud offerings.
Gartner now has a term, “distributed cloud”, to refer to the on-premises and edge services delivered by public cloud providers.
AWS Outposts deliver, as a service, a subset of AWS’s incredibly rich product portfolio.
Azure Stack (now Azure Stack Hub) delivers, as software, a set of “Azure-consistent” capabilities (meaning you can transfer your scripts, tooling, conceptual models, etc., but it only supports a core set of mostly infrastructure capabilities).
Various cloud MSPs, notably Avanade, will deliver Azure Stack as a managed service.
And folks like IBM and Google want you to take their container platform software to facilitate a hybrid IT model.
But no one has previously delivered what I think is what customers really want: Location of the customer’s choice.
Single-tenant; no other customer shares the hardware/service; data guaranteed to stay within the environment.
Isolated control plane and private self-service interfaces (portal, API endpoints); no tethering or dependence on the public cloud control plane, or Internet exposure of the self-service interfaces.
Delivered as a service with the same pricing model as the public cloud services; not significantly more expensive than public cloud as long as minimum commitment is met.
All of the provider’s services (IaaS+PaaS), identical to the way that they are exposed in the provider’s public cloud regions.
Why do customers want that?
Because customers like everything the public cloud has to offer — all the things, IaaS and PaaS — but there are still plenty of customers who want it on-premises and dedicated to them.
They might need it somewhere that public cloud regions generally don’t live and may never live (small countries, small cities, edge locations, etc.), they might have regulatory requirements they believe they can only meet through isolation, they may have security (even “national security”) requirements that demand isolation, or they may have concerns about the potential to be cut off from the rest of the world (as the result of sanctions, for instance).  And because when customers describe what they want, they inevitably ask for sparkly pink unicorns, they also want all that to be as cheap as a multi-tenant solution.
And now it’s here, and given that it’s 2020… the sparkly pink unicorn comes from Oracle.
Specifically, the world now has Oracle Dedicated Regions Cloud @ Customer.
(Which I’m going to shorthand as OCI-DR, even though you can buy Oracle SaaS hosted on this infrastructure.) OCI’s region model, unlike its competitors’, has always been all-services-in-all-regions, so the OCI-DR model continues that consistency.
In an OCI-DR deal, the customer basically provides colo (either their own data center or a third party colo) to Oracle, and Oracle delivers the same SLAs as it does in OCI public cloud.
The commit is very modest — it’s $6 million a year, for a 3-year minimum, per OCI-DR Availability Zone (a region can have multiple AZs, and you can also buy multiple regions).
There are plenty of cloud customers that easily meet that threshold.
(The typical deal size we see for AWS contracts at Gartner is in the $5 to $15 million/year range, on 3+ year commitments.) And the pricing model and actual price for OCI-DR services is identical to OCI’s public regions.
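The minimum-spend arithmetic, using only the figures above (a quick sketch):

```python
# Minimum-commitment math from the numbers above.
ANNUAL_COMMIT_PER_AZ = 6_000_000  # $6M per year, per OCI-DR availability zone
MIN_YEARS = 3

def minimum_commitment(az_count):
    """Total minimum spend for a Dedicated Region deal with the given number of AZs."""
    return ANNUAL_COMMIT_PER_AZ * MIN_YEARS * az_count

print(minimum_commitment(1))  # 18000000 -> $18M over three years for a single-AZ region
```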
The one common pink sparkly desire that OCI doesn’t meet is the ability to use your own hardware, which can help customers address capex vs. opex desires, may have perceived cost advantages, and may address secure supply chain requirements.
OCI-DR uses some Oracle custom hardware, and the hardware is bundled as part of the service.
I predict that this will raise OCI’s profile as an alternative to the big hyperscalers, among enterprise customers and even among digital-native customers.
Prior to today’s announcement, I’d already talked to Gartner clients who had been  seriously engaged in sales discussions on OCI-DR; Oracle has quietly been actively engaged in selling this for some time.
Oracle has made significant strides (surprisingly so) in expanding OCI’s capabilities over this last year, so when they say “all services” that’s now a pretty significant portfolio — likely enough for more customers to give OCI a serious look and decide whether access to private regions is worth dealing with the drawbacks (OCI’s more limited ecosystem and third-party tool support probably first and foremost).
As always, I’m happy to talk to Gartner clients who are interested in a deeper discussion.
We’ve recently finished our Solution Scorecards (an in-depth assessment of 270 IaaS+PaaS capabilities), including our new assessment of OCI, and although I can’t share the scores themselves pre-publication (it’ll probably be end of July before our editing team gets it published), my colleagues and I can discuss our findings now.
Update: The scores are summarized in a publicly-reprinted document now.
Outside help can accelerate cloud journeys.
Jul 8 Note: It’s been a while since I blogged actively, and I’m attempting to return to writing short-form posts on a regular basis.
In my current role within Gartner for Technical Professionals, I talk to a lot of cloud architects, engineers, and other technical individual contributors who are concerned that seeking outside assistance for cloud implementations will lead to long-term outsourcing, lack of self-sufficiency, lack of internal cloud skills, and loss of control.
(The CIOs I talk to may have similar concerns, although typically more related to CIO-level concerns about outsourcing.) Those concerns are real, but getting expert outside assistance — from a cloud managed service provider (MSP), consultancy / professional services provider / systems integrator, or even an individual contractor — doesn’t have to mean sliding down a slippery slope into cloud helplessness.
Things I’ve learned over the past 5+ years of client conversations: Use of expert external assistance accelerates and improves cloud adoption.
Organizations can strongly benefit from expert assistance.
Such assistance reduces implementation times, raises implementation quality, lowers implementation costs as well as long-term total cost of ownership, and provides a better foundation for the organization to enhance its cloud usage in the future.
Low-quality external assistance can have a devastating impact on cloud outcomes.
Choosing the wrong vendor can be highly damaging, resulting in wasted resources, and failure to achieve either the expected business or technical outcomes.
There must be a skills transition plan in place.
Unless the organization expects to outsource cloud operations or application development over the long term, the MSP or consultancy must be contractually obligated to transfer knowledge and skills to the organization’s internal employees.
This transfer must occur gradually, over a multi-month or even multi-year period.
It is insufficient to do a “handoff” at the end of the contract.
The organization needs to shift into a new mode of working as well as gain cloud competence, and this is best done collaboratively, with the external experts handing over responsibilities on a gradual basis.
The organization needs to retain responsibility for cloud strategy and governance.
It is dangerous for organizations to hand over strategic planning to an external vendor, as it is unlikely that plans produced by an external party will be optimally aligned to the organization’s business needs.
For similar reasons, the organization also needs to retain responsibility for governance, including the creation of policy.
An external party may be able to provide useful advice and implementation assistance, but should not be allowed to make strategy or policy decisions.
You can cut years off your migration efforts, and significantly accelerate getting your foundations laid (building a Cloud Center of Excellence, etc.) by getting the right entity to do at least some of it with you, rather than doing all of it for you.
Gartner’s cloud IaaS assessments, 2019 edition.
Jul 30 We’ve just completed our 2019 evaluations of cloud IaaS providers, resulting in a new Magic Quadrant, Critical Capabilities, and six Solution Scorecards — one for each of the providers included in the Magic Quadrant.
This process has also resulted in fresh benchmarking data within Gartner’s Cloud Decisions tool, a SaaS offering available to Gartner for Technical Professionals clients, which contains benchmarks and monitoring results for many cloud providers.
As part of this, we are pleased to introduce Gartner’s new Solution Scorecards,  an updated document format for what we used to call In-Depth Assessments.
Solution Scorecards assess an individual vendor solution against our recently-revised Solution Criteria (formerly branded Evaluation Criteria).
They are highly detailed documents — typically 60 pages or so, assessing 265 individual capabilities as well as providing broader recommendations to Gartner clients.
The criteria are always divided into Required, Preferred, and Optional categories — essentially, things that everyone wants (and where they need to compensate/risk-mitigate if something is missing), things that most people want but can live without or work around readily, and things that are use case-specific.
The Required, Preferred, and Optional criteria are weighted into a 4:2:1 ratio in order to calculate an overall Solution Score.
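To illustrate the scoring mechanics, here’s a sketch of the arithmetic as described. The category counts are made up (they sum to the 265 criteria mentioned above), and the normalization to a 0-100 scale is my own simplification; remember that each criterion is strictly pass/fail, and a multi-part criterion missing any part scores as a fail:

```python
# Sketch of the Solution Score arithmetic: pass/fail criteria weighted
# 4 (Required) : 2 (Preferred) : 1 (Optional). Counts are made up.
WEIGHTS = {"required": 4, "preferred": 2, "optional": 1}

def solution_score(results):
    """results maps category -> (criteria met, criteria total); returns a 0-100 score."""
    earned = sum(WEIGHTS[cat] * met for cat, (met, _total) in results.items())
    possible = sum(WEIGHTS[cat] * total for cat, (_met, total) in results.items())
    return 100.0 * earned / possible

example = {"required": (90, 110), "preferred": (60, 90), "optional": (30, 65)}
print(round(solution_score(example), 1))  # 74.5 under these made-up counts
```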

2019 Scores
If you are a Gartner for Technical Professionals client, the scorecards are available to you today.
You can access them from the links below (Gartner paywall):
Amazon Web Services.
Microsoft Azure.
Google Cloud Platform.
IBM Cloud.
Oracle Cloud Infrastructure.
Alibaba Cloud International (English-language offerings available outside of China).
We will be providing a comparison of these vendors and their Solution Scorecards at the annual “Cloud Wars” presentation at the Gartner Catalyst conference — one of numerous great reasons to come to San Diego the week of August 11th (or Catalyst UK in London the week of September 15th).
Catalyst has tons of great content for cloud architects and other technical professionals involved in implementing cloud computing.  Note that we are specifically assessing just the integrated IaaS+PaaS offerings — everything offered through a single integrated self-service experience and on a single contract.
Also, only cloud services count; capabilities offered as software, hosting, or a human-managed service do not count.
Capabilities also have to be first-party.
Also note that this is not a full evaluation of a cloud provider’s entire portfolio.
The scorecards have “IaaS” in the title, and the scope is specified clearly in the Solution Criteria.
For the details of which specific provider services or products were or were not evaluated, please refer to each specific Scorecard document.
All the scores are current as of the end of March, and count only generally-available (GA) capabilities.
Because it takes weeks to work with vendors for them to review and ensure accuracy, and time to edit and publish, some capabilities will have gone beta or GA since that time; because we only score what we’re able to test, the evaluation period has a cut-off date.
After that, we update the document text for accuracy but we don’t change the numerical scores.
We expect to update the Solution Scorecards approximately every 6 months, and we are working to increase our cadence for evaluation updates.
This year’s scores vs. last year’s
When you review the scores, you’ll see that broadly, the scores are lower than they were in 2018, even though all the providers have improved their capabilities.
There are several reasons why the 2019 scores are lower than in previous years.
(For a full explanation of the revision of the Solution Criteria in 2019, see the related blog post .) First, for many feature-sets, several Required criteria were consolidated into a single multi-part criterion with “table stakes” functionality; missing any part of that criterion caused the vendor to receive a “No” score for that criterion (“Yes” is 1 point; “No” is zero points; there is no partial credit).
The scorecard text explains how the vendor does or does not meet each portion of a criterion.
The text also mentions if there is beta functionality, or if a feature was introduced after the evaluation period.
Second, many criteria that were Preferred in 2018 were promoted to Required in 2019, due to increasing customer expectations.
Similarly, many criteria that were Optional in 2018 are now Preferred.
We introduced some brand-new criteria to all three categories as well, but providers that might have done well primarily on table-stakes Required functionality in previous years may have scored lower this year due to the increased customer expectations reflected by revised and new criteria.
Customizing the scores
The solution criteria, with all of the criteria detail, are available to all Gartner for Technical Professionals clients, and come with a spreadsheet that allows you to score any provider yourself; we also provide a filled-out spreadsheet with each Solution Scorecard so you can adapt the evaluation for your own needs.
The Solution Scorecards are similarly transparent on which parts of a criterion are or aren’t met, and we link to documentation that provides evidence for each point (in some cases Gartner was provided with NDA information, in which case we tell you how to get that info from the provider).  This allows you to customize the scores as you see fit.
Thus, if you decide that getting 3 out of 4 elements of a criterion is good enough for you, or you think that the thing they miss isn’t relevant to you, or you want to give the provider credit for newly-released capabilities, or you want to do region-specific scoring, you can modify the spreadsheet accordingly. If you’re a Gartner client and are interested in discussing the solution criteria, assessment process, and the cloud providers, please schedule an inquiry or a 1-on-1 at Catalyst.
We’d be happy to talk to you.
Updating Gartner’s cloud IaaS evaluation criteria.
Jul 30 In February of this year, we revised the Evaluation Criteria for Cloud IaaS (Gartner paywall).
The evaluation criteria (now rebranded Solution Criteria) are essentially the sort of criteria that prospective customers typically include in RFPs.
They are highly detailed technical criteria, along with some objectively-verifiable business capabilities (such as elements in a technical support program, enterprise ISV partnerships, ability to support particular compliance requirements, etc.).
The Solution Criteria are intended to help cloud architects evaluate cloud IaaS providers (and integrated IaaS+PaaS providers such as the hyperscale cloud providers), whether public or private, or assess their own internal private cloud.
We are about to publish Solution Scorecards (formerly branded In-Depth Assessments) for multiple providers; Gartner analysts assess these solutions hands-on and determine whether or not they have capabilities that meet the requirements of a criterion.
The TL;DR version
In summary, we revised the Solution Criteria extensively in 2019, and the results were as follows: The criteria have been updated to reflect the current IaaS+PaaS market.
Expectations are significantly higher than in previous years.
Expectations have been aligned to other Gartner research, taking into account customer wants and needs in the relevant market, not just in a cloud-specific context.
Many capabilities have been consolidated and are now required.
Most vendor scores in the Solution Scorecards have dropped dramatically since last year, and there is a much broader spread of vendor scores.
The Evolution of Customer Demands
The Evaluation Criteria (EC) for Cloud IaaS was first published in 2012.
It received a significant update every other year (each even-numbered year) thereafter.
When first written, the EC reflected the concerns of our clients at the time, many of whom were infrastructure and operations (I&O) professionals with VMware backgrounds.
With each iteration, the EC evolved significantly, yet incrementally.
In the meantime, the market moved extremely quickly.
The market evolution towards cloud integrated IaaS and PaaS (IaaS+PaaS) providers, and the market exit (or strategic de-investment) of many of the “commodity” providers, radically changed the structure and nature of the market over time.
Cloud IaaS providers weren’t just expected to provide “hardware infrastructure”, but also “software infrastructure”, including all of the necessary management and automation.
This essentially forced these providers into introducing services that compete in many IT markets and in an extraordinary number of software niches.
Furthermore, as the market matured, the roles and expectations of our clients also evolved significantly.
The focus shifted to enterprise-wide initiatives, rather than project-based adoption.
Digital business transformation elevated the importance of cloud-native workloads, while IT transformation emphasized the need for high-quality cloud migration of existing workloads.
The notion that a cloud IaaS provider could successfully run all, or almost all, of a customer’s IT became part of the assumptions that needed to underpin the provider evaluation process.  Today’s cloud IaaS customers have high expectations.
Experienced customers are becoming more sophisticated, but late adopters also have high expectations of a provider that have to be met to help the customer overcome barriers to adoption.
For 2019, we decided to take a look at the EC “from scratch”, in order to try to construct a list of criteria that are the most relevant to the initiatives of customers today.
In many cases, our clients are trying to pick a primary strategic IaaS provider.
In other cases, our clients already have a primary provider but are trying to pick a strategic secondary provider as they implement a multicloud strategy.
Finally, some of our clients are choosing a provider for a tactical need, but still need to understand that provider’s capabilities in detail.
Constructing the Revision
The revision needed to keep a similar number of criteria (in order to keep the assessment time manageable and the assessment itself at a readable length) — we ended up with 265 for 2019.
In order to keep the total number of criteria down, we needed to consolidate closely-related criteria into a single criterion.
Many criteria became multi-part as a result.
We tried to consolidate the “table stakes” functionality that could be assumed to be a part of all (or almost all) cloud IaaS offerings, in order to make room for more differentiated capabilities.  We tried to be as vendor-neutral as possible.
The evaluation criteria have evolved since the initial 2012 introduction; when we introduced new criteria in the past, we often ended up with criteria requirements that closely mirrored the feature-set of the first provider to offer a capability, since that provider shaped customer expectations.
In this 2019 revision, we tried to go back to the core customer requirements, without concern as to whether cloud provider implementations fully aligned with those requirements — the criteria are intended to reflect what customers want and not what vendors offer. There are requirements that no vendors meet, but which we often hear our clients ask for; in such cases we tried to phrase those requirements in ways that are reasonable and implementable at scale, as it’s okay for the criteria to be somewhat aspirational for the market.
We tried to make sure that the criteria were worded using standard Gartner terms or general market terminology, avoiding vendor-specific terms.

(Note that because vendors not-infrequently adopt Gartner terms, there were cases where providers had adopted terminology from earlier versions of EC, and we made no attempt to alter such terms.) We tried to keep to requirements, without dictating implementation, where possible.
However, we had to keep in mind that in cloud IaaS, where there are customers who want fine-grained visibility and control over the infrastructure, there still must be implementation specificity when the customer explicitly wants those elements exposed.
Defining the Criteria
During the process of determining the criteria, we sought input broadly within Gartner, both in terms of discussing the criteria with other analysts as well as incorporating things from existing Gartner written research.
(And the criteria reflect, as much as possible, the discussions we’ve had with clients about what they’re looking for, and what they’re putting into their RFPs.) In some cases, we needed input from specialists in a topic.
In some areas of technology, clients who need to have deep-dive discussions on features may talk almost exclusively to analysts specialized in those areas.
Those analysts are familiar with current requirements as well as the future of those technology areas, and are thus the best source for determining those needs.
For example, areas such as machine learning and IoT are primarily covered by analysts with those specializations, even when the customers are implementing cloud solutions.
There are also areas, such as Security, where we have detailed cloud recommendations from those teams.
So we extensively incorporated their input.
We also looked at non-cloud capabilities when there were market gaps relative to customer desires.
There are areas where either cloud providers do not currently have capabilities, or where those capabilities are relatively nascent.
Thus, we needed to identify where customers are using on-premises solutions, and want cloud solutions.
We also needed to determine what the “minimum viable product” should be for the purposes of constructing a criterion around it.
Feedback from non-cloud analysts was also important because it identified areas where clients were not using a cloud solution because of something that was missing.
In many cases, these were not technology features, but issues around transparency, or the lack of solutions acceptable on a global basis.
Finally,  the way that customers source solutions, build applications, and manage their data is changing.
We tried to ensure that the new criteria aligned with these trends.
Because more and more of our clients are deploying cloud solutions globally, every criterion also had some requirements as to its global availability.
These are used only for advisory purposes and are not part of scoring.  The vendors were allowed to give feedback on the criteria prior to publication.
We wanted to check if the criteria were reasonable, and seemed fair.
We incorporated feedback that constituted good, vendor-neutral suggestions that aligned to customer requirements.
The End Results
When you see the Solution Scorecards, you may be surprised by lower scores on the part of many of the providers.
We’re being transparent about the Evaluation Criteria (Solution Criteria) revision in order to help you understand why the scores are lower.
The lower scores were an unintentional side-effect of the revision, but reflect, to some degree, the state of the market  relative to the very high expectations of customers.
Note that this year’s lower scores do not indicate that providers have “gone backwards” or removed capabilities; they just reflect the provider’s status against a raised bar of customer expectations.  We expect that when we update the scorecards in the second half of this year, scores will increase, as many of the vendors have since introduced missing capabilities, or will do so by the next update.  We retain confidence that the solution criteria are a good reflection of a broad range of current customer expectations.
Because many vendors are doing a good job of listening to what customers and prospects want, and planning accordingly, we think that the solution criteria will also be reflected in future vendor roadmaps and market development.
We discuss the Solution Scorecards and scores in a separate blog post.
Transitioning roles at Gartner.
Jun 19 I’m excited to announce that, as of yesterday, I’ve joined the Gartner for Technical Professionals (GTP) team here at Gartner.
For years, I’ve enjoyed working closely with Kyle Hilgendorf, Eli Khnaser, Mindy Cancila, Doug Toombs, Marco Meinardi, Alan Waite, and many others in our GTP research division, and I’m looking forward to deepening this collaboration.
Those of you who have known me for a while might remember that I spent more than 15 years in Gartner’s Technology and Service Provider division, and then, for the last two and a half years, I’ve been in the Infrastructure Strategies team in Gartner’s IT Leaders group.
Throughout all of these years, I’ve written a lot of deep-dive research for both managerial and technical audiences, and spent a lot of time talking to everyone from the CIO to the sourcing managers and engineers in the trenches, as well as vendors and investors.
I’ve always enjoyed being more hands-on, though, and the move into GTP will give me a chance to write more in-depth practical advice.
For the next couple of months, I’ll be in a state of transition.
I’ll be doing both types of inquiry for a while, but in the future, clients will need a Gartner GTP “seat” to speak with me.
In the next month or two, you’ll see me publish a bunch of research into the ITL agendas, as I finish up that work, and then rethink my previously-planned agenda (much of which will still likely be published, albeit into GTP).

I’ll be at the Gartner Catalyst conference in August with my first GTP presentation, called “Improve Cloud Operations with Site Reliability Engineering”, focused on how to take the principles, practices, and tools used to manage massive cloud-native applications, and apply them at an enterprise level for cloud operations at a more typical scale.
The cloud IaaS team at Gartner is exceptionally collaborative across our divisions and teams, and I expect to continue working very closely with all the awesome analysts that I’ve worked with over the years.
Gartner is backfilling my previous role, and I highly encourage any cloud IaaS experts out there to reach out to me if you’re interested.
Here’s the job req: https://bit.ly/2JBagOb
My Reading List (2018)
Jan 07, 2019
After college, I resolved to read one book a month.
It can be fiction, non-fiction, technical, business-oriented, or whatever, as the goal was always to be absorbing and digesting new ideas and information, even if just for fun.
More recently, I’ve generally tried to read 3 per month which works great with a Kindle and a ton of travel.
This year I only read 25 because I had fewer travel days (yay!), picked up a couple of iPad games (boo!), and used my flight time to actually work (meh).
Anyway, here are my top five in order:
Monster Hunter Memoirs: Saints by John Ringo with Larry Correia.
Monster Hunter Memoirs: Grunge by John Ringo with Larry Correia.
The Hard Thing about Hard Things by Ben Horowitz.
Winged Hussars by Mark Wandrey.
Monster Hunter Memoirs: Sinners by John Ringo with Larry Correia.
And here are the 25 books I completed in 2018, sorted by author:
Kai Wai Cheah
I read the first of this series last year and was intrigued by the Shadowrun-ish concept of an alternate universe with various forms of magic and beyond-current technology running amok. In this one, the team we met last time is under heavy attack and has to push back.
Hammer of the Witches.
Robert Cialdini
I’m ashamed to admit that while I started reading this a few years ago, I never finished it until this year.
Regardless, this was a great discussion of how people can be and are persuaded by everything around them.
This discusses everything from purposeful pushes by marketing and sales people to unconscious nudges by our peers.
Influence: The Psychology of Persuasion.
Larry Correia
I read most of the Monster Hunter series in 2015 and look forward to each one as it comes out.
This time, Correia recruited a great series of authors – including Jim Butcher of The Dresden Files – to write short stories in the same universe.
These stories ranged from amusing to great so I’d definitely recommend it if you like the series.
The Monster Hunter Files (Monster Hunters International Book 7).

Vox Day
I’ve been reading Vox Day in some shape or form since 2003 or so.
One of the things I learned early is that he calls his shots and is rarely wrong.
When a few friends passed along Jordan Peterson in early 2017, I noted it and moved on.
When a bunch of people did, I finally watched the Cathy Newman interview and was impressed but still didn’t dig in.
Then I started reading Vox’s criticism and started paying attention.
Once again, Vox is right.
Don’t get angry, get informed.
Jordanetics: A Journey into the Mind of Humanity’s Greatest Thinker.

The Four Horsemen Universe
After reading Ringo’s Troy Rising trilogy (below), I started getting recommendations for various space adventure books and started reading the Four Horsemen series.
Unlike most of the books in this list, the series is written by a small flock of authors, so while they started off okay, some authors have riffed in fun directions, fleshing out the major players and the universe.
While I’ve enjoyed all of them so far, Winged Hussars is my favorite but the characters in Golden Horde are the most intriguing.
Cartwright’s Cavaliers by Mark Wandrey (The Revelations Cycle, Book 1).
Asbaran Solutions by Chris Kennedy (The Revelations Cycle, Book 2).

Winged Hussars by Mark Wandrey (The Revelations Cycle, Book 3).
The Golden Horde by Chris Kennedy (The Revelations Cycle, Book 4).
A Fistful of Credits by various authors (The Revelations Cycle, Book 5).
Peacemaker by Kevin Ikenberry (The Revelations Cycle, Book 6).
Marion G. Harmon
I read a bunch of the “Wearing the Cape” series back in 2015 and while I enjoyed the beginning, I felt like he pushed it a little far with crossing over to other worlds and other overused tropes.
I picked up this one and was pleasantly surprised; it felt like he had the story back on track.
It was tight and compelling but still lighthearted in the right places.
Recursion (Wearing the Cape Book 7).
Ben Horowitz
Entrepreneurship is all about failure – both big and small – and this details many of Horowitz’s hardest, ugliest times and the people around him each step of the way.
Reading some of it was gut-wrenching, but it made a number of things make sense about startups in general and Okta specifically.
Overall, it was a good read.

The Hard Thing about Hard Things.
Hugh Howey
I loved Howey’s “Silo” series and looked forward to these short stories.
While there were a couple great ones – one set in the Silo universe, one about virtual worlds – on average they were uninteresting and didn’t really fit into the overarching theme.
Skip this one.
Machine Learning (short stories).
Alan Janney
This series has been my guilty pleasure the last few years.
It started with a super-virus-infected teenager joining the football team, became mutant tigers who eat people whole, led to the worldwide collapse of civilization, and culminated in this book to bring most of the threads together.
This has been a great ride and lots of fun.
Further, I’ve traded notes with the author who’s given me some great book recommendations to date.
Lesson learned: Say “thanks!” to the people who create great things that you appreciate.
Wrath & Tears: The Conclusion (Carmine Book 3).
John O’Brien I really enjoyed O’Brien’s series about a zombie outbreak and the small group who comes together to survive.
In this one, the end isn’t just near; it’s happening in real time throughout the series.
Lifting the Veil: Fallen – This is the first of a series telling the story of Revelation, switching between Heaven’s and Earth’s points of view.

Lifting the Veil: Winter – In this one, we get a first-person perspective into every nuance and detail of the disaster that has hit Earth. While it was okay, it was significantly weaker than the first, made even worse by the opening note where O’Brien warns you that it’s weaker. If you know it’s not good, make it better!

John Ringo Being a fan of the Monster Hunter series, I was intrigued by Larry Correia allowing another author to play in his universe.
Unlike the others, this one is written as a memoir from a monster hunter in the 1980s with a “boots on the ground” perspective that also fills in some mythology from the series.
Each of these has been fantastic and all three made my Top 5 above. Don’t read these until you get through Monster Hunter Alpha.
Monster Hunter Memoirs: Grunge.
Monster Hunter Memoirs: Sinners – This one is a tough read.
It’s not because it’s poorly written, it’s just the opposite.
By the time you get here, you know the characters and appreciate them so reading their pain stings.
I can’t say more without spoiling it.
Monster Hunter Memoirs: Saints – This one is my favorite Monster Hunter book of all, across both the main series and the memoirs.
After reading “Monster Hunter Memoirs: Grunge,” I picked up a couple of his other books. The first one was a zombie apocalypse that started off okay and quickly went off the rails.
I stopped reading when the “security” team attended a midnight concert in mid-collapse Central Park and the 13yo daughter went full Mary Sue, killing dozens of zombies.
Skip this one.

I gave Ringo one more shot and picked up Live Free or Die. I am so happy I did. In the opening pages, humans experience first contact with a benign species who grows bored with them, and then a second species shows up, wipes out a few cities, and demands tribute.
That’s roughly chapter one and it gets fascinating and amusing from there.
The first book is almost 100% centered around one character while the second and third broaden the point of view to a variety of characters on both sides of increasing interstellar tensions.
Live Free or Die (Troy Rising Book 1).
Citadel (Troy Rising Book 2).
The Hot Gate (Troy Rising Book 3).
John C. Wright Last year I read a couple things from Wright and frankly fell in love with his writing style.
Usually when a story introduces bizarre terminology, it’s used to give characters an excuse to explain “common” concepts to each other for the reader’s benefit. That’s poor storytelling.
Wright uses it to drive things forward and foreshadow challenges coming for the characters.
Superluminary: The Lords of Creation – This one follows a junior member of a royal family that has learned science beyond our understanding.
Superluminary: The Space Vampires – After the last book, the royal family is at war with space vampires.
Yes, it sounds silly but it’s a great story told from the command perspective where the last of life is fighting back.
Notice I said “life” and not humanity.
Superluminary: The World Armada – This is the final book of the trilogy and while it was still filled with some bizarre concepts, the way he tied it all together throughout and by the end was fantastic.
I didn’t see the end coming until it was there.
If I included a #6 top book for the year, it would have to be one of these.
All links above are Amazon affiliate links.
If you think I missed something great, drop me a note and let me know.

Windows 7.
April 23, 2014. I’ve been using Windows 7 on my main PC for a little over a year now, and despite my earlier (bad) experiences and some hiccups along the way, I’m very happy with it.
With support for Windows XP now officially ended, and the possibility of future Windows XP patches extremely doubtful, it’s time for other holdouts like me to finally upgrade.
If you’re sick of Windows, you can try alternatives like Zorin OS or Chrome OS. Masochists among you might want to try Windows 8, but I really can’t recommend that. Your best bet in terms of making an easy transition is definitely Windows 7. In the remainder of this article, I’ll provide helpful warnings and suggestions for easing the transition from Windows XP to Windows 7.
This is a ‘living’ document; I will add to it as I learn more.
Availability.

The first problem with upgrading to Windows 7 is availability. Search for Windows 7 in the Microsoft Store and you’ll be redirected to Windows 8 resources.
Stores no longer have retail Windows 7 packages on their shelves.
Microsoft has officially stated that Windows 7 OEM packages will no longer be available after February 2015.
Until then, you can still buy Windows 7 from online retailers like NewEgg.
Unfortunately, it ain’t cheap. Expect to pay up to $200 for OEM Windows 7 Professional. Don’t buy any of the Home versions, since they are missing important features.
The Ultimate version is probably overkill for most users.
To upgrade or not to upgrade?
If you will be installing from OEM media, then you don’t have the option to upgrade from Windows XP. Which is just as well, since the upgrade approach inevitably leads to problems.

Keep your existing Windows XP installation.
Beyond the obvious need to back up your data before wiping your hard drive and installing Windows 7, there is a very real possibility that without a complete copy of your hard drive, you’ll lose something in the transition. Hard drives are relatively inexpensive now, so I suggest one of the following approaches.
Mount your old drive externally.
Buy yourself an external USB hard drive enclosure. Make sure it supports your hard drive: since you’re upgrading from Windows XP, your computer may have an older IDE drive. Newer computers use SATA drives.
There’s a useful comparison over at diffen.com.
Remove your old hard drive (the one with Windows XP) and mount it in the external enclosure.
Buy a new hard drive (again, making sure it’s the right type) and install it in your PC.
Install Windows 7 on the new hard drive.
With this setup, you’ll have a new installation of Windows 7 but will still have access to all your files, folders, and even settings (with some work) from your old Windows XP install.
Image your old drive.
Another approach is to make an image of your old drive, copying it to another hard drive.
You can save space by creating a compressed image file of your old drive, but that makes accessing your old files more difficult, so I recommend copying the drive as-is.
Making Windows 7 more comfortable to use.

Anyone upgrading from Windows XP to Windows 7 will notice a few changes. Some of these changes are good, and some of them are annoying.
There are ways around most of the latter.
More to follow… Original article (posted July 2, 2012) is below.
Windows 7 is almost a good operating system.  There’s a ton of new stuff for power users, which is usually enough to get me to switch.  But my many attempts to switch from XP to 7 have always ended up with me switching back to XP.
Sure, some of the early problems with Windows 7 have been resolved.  And there are workarounds for most of the rest.  But there are still some issues that I just can’t get past.
One thing that bothers me about Windows 7 is the increased height of the taskbar.  Why is it taller than in XP?  And why can’t I change it?  Who decides these things, and what was the rationale?
Update: I found the setting for this with help from a friendly redditor.
A more serious concern is the networking in Windows 7, which causes many applications to run noticeably slower than XP across networks.
Mysterious and intermittent problems with getting Windows 7 and XP to talk across networks are also common.
I’ll post links to Windows 7 issues here, along with solutions and workarounds.  When a particular issue needs more than a few words of explanation, I’ll write up a post about it and link to the post here.
I’ll almost certainly be switching some of my machines from XP to 7 in the near future, but I’m going to hold off until I can find ways around all these issues.  By then maybe Windows 8 will be available, but then again, Windows 8 looks even worse…

Monitoring Ubuntu Linux from LoadRunner (RSTAT).
Over the years I’ve used RSTAT to monitor the performance of Linux servers during performance tests many times.
Historically I’ve asked Linux admins to enable RSTAT for me but over the last year or two I’ve incorporated various Linux machines into my test and demo environment so I’ve had to do this myself.
I’m primarily a Microsoft specialist, so I always seem to rely on Google searches and some trial and error to resolve problems that I encounter with Linux, and this weekend was no exception.
I spotted a Facebook post from Scott Moore describing problems that he was having with LoadRunner and RSTAT, so I decided to see if I could help him.
I installed RSTAT on my Ubuntu test machine at home and immediately saw the dreaded error message “Error while creating the RPC client. Ensure that the machine can be connected and that it runs the rstat daemon (use rpcinfo utility for this verification).” Whatever I tried, I couldn’t get RSTAT to start.
After some ‘Googling’, I found a number of posts in various places which helped me to resolve this problem.
A post on StackExchange (related to NFS issues) told me that rpcbind may have a dependency on a package called nfs-common.
I installed this “just in case”.
The same post told me that to start STATD (which I think is related to RSTAT) automatically at boot, I needed to add “NEED_STATD=yes” to the file “/etc/default/nfs-common”. A post on GadgetWiz.com told me that I should edit my /etc/hosts.allow file to ensure that the local host could make rstatd requests.
After restarting my Ubuntu PC, I checked that RSTATD was running using the commands “rsysinfo localhost” and “rup localhost”.
I was pleased to see that after making these changes, it was possible to monitor my Ubuntu machine using LoadRunner.
I repeated my test on a new Ubuntu 16.04 LTS machine which I monitored using LoadRunner 12.53. A list of the commands that I used is below.
Install rstatd and related components:
    sudo apt-get install rstatd rstat-client nfs-common
Add a line to /etc/hosts.allow to allow certain hosts to make rstatd requests:
    rpc.rstatd: localhost
Add a line to /etc/default/nfs-common to start STATD automatically:
    NEED_STATD=yes
These commands confirm that RSTATD is running:
    rpcinfo -p localhost
    rsysinfo localhost
    rup localhost
Adding JavaScript functions to LoadRunner.
After reading Boris Kozorovitzky’s blog article, “How to use JavaScript in your HP LoadRunner scripts”, I’ve been inspired to experiment with LoadRunner 12.
Historically, most LoadRunner functions have been written in C, but many performance testers (me included) aren’t particularly fond of C and prefer to use more up-to-date languages.
I was already aware that there are some great JavaScript libraries out there which can extend the functionality of LoadRunner, but I hadn’t tried them until now.

Boris’ article describes how to integrate JavaScript code into your C-based scripts. Following his guidance, I developed a script which uses the DateJS library for date calculations.
I’m impressed with the flexibility of DateJS and the extra capabilities that it offers over the standard LoadRunner date/time functions.
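For context, LoadRunner’s built-in approach to date arithmetic is lr_save_datetime, which works from fixed offsets relative to “now”. This is a minimal illustrative sketch of mine, not something lifted from the actual script:

    // Save tomorrow's date into the parameter "tomorrow" using the
    // built-in lr_save_datetime, which only supports fixed offsets.
    lr_save_datetime("Tomorrow is %d-%m-%Y", DATE_NOW + ONE_DAY, "tomorrow");
    lr_output_message("%s", lr_eval_string("{tomorrow}"));

A calculation like “the next Friday after today” has no direct equivalent there, which is exactly the kind of gap that DateJS fills.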
To incorporate the DateJS functionality into my script I did the following.
Turned on the “Enable running JavaScript code” option (Run-Time Settings > Internet Protocol > Preferences > Set advanced options > Options).
Added my date.js code into the script using Solution Explorer (right-click the “Extra Files” node in Solution Explorer, then choose “Add files to script”).
Adding JS functions to the script is then achieved by using the LoadRunner web_js_run function, e.g. as shown below.
[Screenshot: JavaScript example in LoadRunner]
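For anyone who can’t see the screenshot, here is a minimal sketch of that kind of call, assuming date.js has been added under Extra Files as above. The DateJS expression and the parameter name are illustrative choices of mine rather than a copy of my actual script, so check the web_js_run entry in the VuGen function reference for your LoadRunner version:

    Action()
    {
        // Evaluate a DateJS expression and store the result in the
        // LoadRunner parameter "nextFriday". Assumes date.js was added
        // to the script via the Extra Files node, as described above.
        web_js_run(
            "Code=Date.today().next().friday().toString('dd-MM-yyyy');",
            "ResultParam=nextFriday",
            SOURCES,
            "File=date.js", ENDITEM,
            LAST);

        // Write the computed date to the replay log.
        lr_output_message("Next Friday is %s", lr_eval_string("{nextFriday}"));

        return 0;
    }

If the call fails, the first thing to check is that the “Enable running JavaScript code” option really is switched on.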

I added my sample script to my GitHub repository so that other people can see just how easy this is and potentially benefit from the new date/time functions in this sample LoadRunner script.
You can download the script here: https://github.com/richardbishop/LoadRunnerSamples/tree/master/DateJS

I’m working with  on a new academic journal.
It’s called the International Journal on Innovations in Online Education (IJIOE) and will be published by.
The Aim and Scope of the Journal is provided below.
Take care,
If you’re looking for a powerful system that allows you to easily change in-world surface images/textures, make live modifications and add customized web-links within your -based , then you might be interested in a new product from ReactionGrid.
It’s called ACES, and it’s now . Imagine changing the image textures of storefronts, buildings, any surface at all in your Jibe world.
If you have any questions, please feel free to email me at.
Take care,
is an alliance of scientists, educators and entrepreneurs from various fields who gather twice a month in for.
They’re a very creative and engaging group, and I was honored to be recently invited to . Here’s a summary of my topic: Below is a full video of my presentation, and you can find .
Take care,
The  (OSCC) is an annual conference that focuses on the developer and user community creating the.
Organized as a joint production by  and the , the virtual conference features two days of presentations, workshops, keynote sessions, and social events across diverse sectors of the OpenSimulator user base, and I’m thrilled to be both attending and presenting again this year.
All the inworld venue tickets are sold out, but if you can still  and watch all the presentations live on.
“You Only Own what you can Carry: How to backup and move your content between Second Life, OpenSim and Unity” In this hands-on workshop, I’ll be demonstrating exactly how to export your own user-created objects (both prim and mesh based) and move them between , and .
Attendees will watch my desktop via a live TeamViewer screenshare and follow along on their own using freely-available software.
Requirements: Inworld attendees should be using the  and have pre-installed both the  and the . No previous technical expertise is required, just a willingness to learn.
I’ll also be a panelist later in the day on “” where I’ll be sharing my thoughts about DRM versus content licensing.
Take care,
The  is being held September 18-20 in Bethesda, Maryland.
This conference will assess a wide range of progressive ideas for the future of e-Learning, focusing on the idea of technology as a means to education rather than an end in itself.
The conference organizers have lined up a wonderful range of interdisciplinary speakers and are planning to attract a wide group of heterogeneous scholars and practitioners.
I’ll be attending the entire conference, and I’m honored to be giving the    Here’s what I’ll be talking about: My goal will be to tell an interesting story with examples and demos of technologies that I think really leverage how our minds naturally embrace the world around us.
One such technology that I’m currently exploring, and that you’ve probably never heard of, is Wiglets. Visit  to learn a lot more about Wiglets.
Wiglets are autonomous, evolving, self-animated and self-motivated agents that can exist in both completely virtual and augmented reality environments.
They exist at a wildly creative intersection of artificial life, art and gaming.
And perhaps best of all, you can interact with them directly through touch and gestures.
Another topic of discussion will be the affordances of multiuser 3d virtual worlds, especially how one can reduce the barrier to entry for people interested in leveraging them for educational purposes.
has recently developed some new tools that integrate with the Unity3d-based  to provide on-the-fly content editing in a simple yet powerful way.  I’ll be giving a sneak preview during my presentation.
I’ll also be discussing and giving examples of innovative uses of commonly used virtual world technologies such as ,  and the.
If you plan on attending and would like to connect with me at the conference, please drop me a line on or .  And if you’re looking to interact with the organizers and other attendees and speakers, be sure to  Take care,.
I’ll be attending these two upcoming conferences.
If you’re planning to attend either of them or if you just happen to be in town when they occur, please  if you’d like to meet up and chat about learning in virtual worlds.
The main aims of this conference are to increase our understanding of experiential learning in virtual worlds, both formal and informal, to share experiences and best practices, and to debate future possibilities for learning in virtual worlds.
For full details,  In my current position at  and my previous work at  and , I have explored the use of a wide range of gaming and virtual world platforms to augment education.
Today there are a number of very interesting virtual world technological trends involving specific gaming technologies like  as well as the growth of Open Source platforms such as.
My ongoing work involves finding the right match between educational goals and technological affordances as well as identifying key synergies when virtual world technologies are interwoven with existing social media and web-based educational content.
I’m particularly thrilled about this panel because I’ll be participating with  from the University of Arizona.
Bryan is a true pioneer in using virtual worlds for experiential learning, and he’s been working with virtual environments since his dissertation project in 1997 when he created a virtual simulation of Harlem, NY as it existed during the 1920s Jazz Age and Harlem Renaissance.
was one of the earliest full virtual reality environments created for use in the humanities and certainly one of the first for use in an African American literature course.
The project continues to grow and evolve as Bryan explores new virtual world platforms.
This new conference will assess a wide range of progressive ideas for the future of e-Learning, focusing on the idea of technology as a means to education rather than an end in itself.
The conference organizers are lining up a wonderful range of interdisciplinary speakers and are planning to attract a wide group of heterogeneous scholars and practitioners.
For full details,  I’ll be  at this conference.
And if you’re looking to interact with the organizers and other attendees and speakers, be sure to.
UPDATE 4/12/2015 – Unfortunately, it looks like this directory system has now been shut down.
But fortunately I’ve found an even better live hypergrid directory.
Check out . Two of my biggest challenges when exploring  regions across the multitude of grids have always been: 1) finding places where people are currently visiting and 2) not wasting time trying to connect to places that are offline.
And over the years, there have been commendable efforts to manually create lists of Hypergrid-connected places (e.g., ) as well as strong work to create networked inworld devices (e.g., ).

All this work has been wonderful and very helpful to the growth of the Hypergrid, which is why I’m very excited by the new  created by .
He’s nailed all of those features right out of the gate.
There aren’t many regions listed right now since the system is brand new and opt-in, but it’s incredibly easy to join and therefore could grow very quickly.

To get your own Hypergrid-connected region included in the list, you just rez an object on your region which phones home to the iDreamsNet website and immediately creates an entry for your region.
You are given a special link where you can go edit your listing (add photo, descriptive text, tags, website) and, over time, this object communicates back to the iDreamsNet website to let it know if your region is currently online and how many people are currently on it.
With the recent , there’s no need anymore to complicate Hypergrid directories with grid coordinates or “upper, middle, lower” categories.
Now, anyone can jump from any Hypergrid location to any other Hypergrid location. We just need a simple, automated and powerful directory.
Take care,
Yesterday, the  (HGAC) visited the Center for Global Health’s.
About 20 of us made the voyage, initially gathering at and then travelling together as a group.
This was the first HGAC trip in a long time, and it was wonderful to see so many familiar friends as well as some brand new faces.
If you’re a fan of the , you should definitely check out the new  for and.
Fixed a problem with long teleports in OpenSim (“4096 bug” ) (Latif).
Latif Khalifa has fixed the bug that prevented Hypergrid explorers from jumping to places more than 4096 regions away.
No more mandatory intermediate hops.
No more “cannot reach destination – too far away” messages.

I encourage all explorers of the Hypergrid to please take a moment and . His hard work has resulted in a major improvement to the use of the Hypergrid and the evolution of OpenSim as a constellation of easily accessible interconnected grids.
Which brings me to the topic of the.
Since my , I’ve received a great deal of interest in possibly restarting our tours of the Hypergrid.
Many people reached out to me, and the outpouring of interest was very inspiring.
So I’m rebooting the tours.
Our next tour will be Saturday Sept 28 at 10pm EDT.
For all the details, please.
Take care,
This past weekend I attended and spoke at the very first (OSCC13).
It was an amazing event full of outstanding presentations, great networking opportunities, and spectacular venues with tons of attendees.
It was also truly remarkable to see how far has evolved and matured as a virtual world platform.
Lastly, for those of you interested in me possibly restarting the tours (I got a lot of positive feedback at the conference), be sure to  If I see enough interest, I’ll definitely start them up again.
Please read on for my own presentation summary, video and downloadable slides.
You can also watch recordings of all the other presentations in the . ADDENDUM 9/10/2013: Be sure to read this blog post: “.” It’s an outstanding summary of the conference by Crista Lopes, the inventor of the  and one of the conference’s main organizers.
Take care,