My work's IS policies have rendered my iPad useless

We cannot access our M365 accounts except in first-party apps. The end. No matter the device. No matter the OS. The only ‘way around’ is to use the browser versions, which work in any browser.

I did, at one point, set up an automation (using Flow, or Power Automate, or whatever Microsoft calls it now) that pushed my work appointments out to a Google Calendar, but it was worse than using Shortcuts and I simply gave up on it.
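For anyone curious what that kind of replication involves, here is a minimal, hypothetical sketch of a one-way copy from an M365 calendar into Google Calendar. It assumes you already hold valid OAuth tokens for both services (the hard part, and exactly what a locked-down tenant may refuse to issue); the token variables and date range are placeholders, and this is nowhere near the robust sync that Flow/Power Automate provides.

```python
# Minimal one-way copy: M365 calendar -> Google Calendar.
# Tokens below are placeholders; obtaining them is the hard part.
import requests
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

GRAPH_TOKEN = "..."                      # placeholder Microsoft Graph token
google_creds = Credentials(token="...")  # placeholder Google OAuth token

# 1. Read a week of events from the M365 calendar via Microsoft Graph.
resp = requests.get(
    "https://graph.microsoft.com/v1.0/me/calendarView",
    params={
        "startDateTime": "2023-01-02T00:00:00",
        "endDateTime": "2023-01-09T00:00:00",
    },
    headers={"Authorization": f"Bearer {GRAPH_TOKEN}"},
)
resp.raise_for_status()

# 2. Re-create each event in the primary Google Calendar.
#    Graph returns times in UTC by default, which Google accepts.
gcal = build("calendar", "v3", credentials=google_creds)
for ev in resp.json().get("value", []):
    gcal.events().insert(
        calendarId="primary",
        body={
            "summary": ev["subject"],
            "start": {"dateTime": ev["start"]["dateTime"], "timeZone": ev["start"]["timeZone"]},
            "end": {"dateTime": ev["end"]["dateTime"], "timeZone": ev["end"]["timeZone"]},
        },
    ).execute()
```

Even this toy version has no deduplication or deletion handling, which is roughly why it ends up being worse than just living in the browser.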


Oh, this is only on the College-issued MacBook Pro. They leave personally-owned devices alone.


BBEdit is a macOS app, not an iOS app. So are you locked out on macOS as well??

You should investigate your university’s policy for the case where university-related information on your personal computer is compromised in some way. In the US especially, anything related to student records is often subject to the highest levels of lockdown. In a nutshell, you do not have a presumed right to mess with such stuff on a “personal” computer or with “personal” software.

OTOH, when the computer or software was purchased through a contract or grant, the university has an inherent, if not explicit, obligation to ensure that you are not locked out from using said computer or software.


JJW


I’m happy using browser versions.

Flow is definitely clunky.

They do not leave my personally-owned iPad alone. That’s what I’m lamenting, but am slowly resigned to accepting.

You are right. I also do some of the regex things in Drafts. My mistake.

100% agree. And hence I never put patient or student information on my laptop. That stays sequestered in the university/hospital cloud. I’m talking about mundane things like setting up a meeting (can’t use a template) or arranging projects (can’t cut and paste from Obsidian) or taking notes on academic papers. Also, I’ve bought all my own software. I even bought Office, but the University version overrode that.

There is just no easy way to prevent an employee from going further than that, or to make sure that no employee steals the company’s data for whatever reason.
So, since they could not control which data might be OK to move and which not, they just shut everything down.


Five years ago, I would have said, “Good for him.” Now, it’s really out of IT’s domain and in Legal’s domain. Institutions have to do what they have to do to maintain their cybersecurity insurance. Windows users have had to deal with this for years. With the rise of MDM solutions and the lack of any need to bind a Mac to a domain, now it’s our turn.

From what I’ve seen or heard, JAMF and WorkspaceONE have pretty good Software Centers/App Stores that allow vetted applications to be installed or updated without local admin access. Though that doesn’t help with the “oddball” applications some of us like to use.

I should have mentioned this earlier. Microsoft’s apps allow your workplace to remotely wipe work data from your device, something it can’t do with most other apps.

This is ironic, don’t you think? That Windows is both the culprit and the solution?

BTW the company from which I am retired made the same decision at least 30 years ago.

Sometimes. It really depends on who the head of security is and how draconian they decide their policies need to be. Security tools, policies, and measures are mainly aimed at Windows PCs, and the tools to manage Macs, if they exist at all, are often a poorly designed afterthought. I’ve seen places that disable the ability to set your own wallpaper. Why? Because they could. Because when you give someone a switch to flip, they are going to want to flip that switch. They might even be able to justify it, but mostly they want control.

But how effective are security measures like this? And how much of it is really just security theater? Take the pasteboard control in the OP’s post, for example. It attempts to protect against two types of data loss: intentionally malicious actions on the part of the user, and unintentional loss due to actions outside their control. For the first, it is extremely difficult to actually prevent a person from stealing data they already have access to. You might be able to close off the most obvious pathways out of the organization, but if someone has decided to get data they have access to somewhere it’s not supposed to be, they’ll figure out a way to do it. (Do you disable writing to disk? How about screenshots? How about screen recordings? What if someone records the screen with another device? Etc.)

But what about the second scenario, protecting against unintentional loss? This solution presupposes that Microsoft applications are the safest and most secure applications on the market, which, given their history, anyone would be hard-pressed to accept as fact. Microsoft has gotten better with security, but personally I don’t trust them as far as I can throw them (however far you can throw software?). What’s really going on here, a lot of the time, is that if/when something does happen, the security folks can say “We did everything we could to prevent this!”

In my experience, the single most effective measure for increasing the security of an organization is education. Especially on recognizing and appropriately dealing with phishing or other social engineering attempts. Like a lot of us in this forum, I care a lot about the tools I use to do my job, the machine I sit in front of for hours every day. I don’t have a problem with security teams doing their job, I have a problem when that job becomes more CYA than actual security, and their CYA starts encroaching on my workflow.


I work in Information Security and Data Protection and I rail against the use of the word draconian. I also disagree with the idea this is all on the Head of Security. They’ll make suggestions or proposals. But someone has to sign off on those. I’ve often found that some of the organisations which go way over the top are doing so as a kneejerk reaction to an Information Security incident.

Everything is built around the risks to the organisation. For example, at the moment, ransomware and account takeovers are massive risks to all organisations. This means that allowing local admin is really bad practice. People open attachments and click on email links without even thinking, and even the security offered by Multi-Factor Authentication has been shown to be worked around through deception and human error.

As someone else in this thread has already said, some of these requirements don’t even come from the organisation, but are the result of a contract entered into, whether an agreement with a customer or with a supplier (e.g. the organisation’s insurance company). Yes, these agreements are entered into willingly, but if you want Cyber Insurance, you have to do these things to reduce the risk.

Do these things make people’s jobs harder? Sometimes, yes. Should the company do more to provide tools that allow more efficient operation? Probably also yes. Should people be allowed to use whatever software they want to do the job? Not in an organisation which manages risk properly; otherwise you end up with an overworked information security team who spend so much time assessing new tools that they can’t make a difference, or one with so many unknown vulnerabilities that information is leaking out of it and no-one knows.


Wouldn’t strict restrictions push people to use less secure methods to get their daily work done?

For example, if I need to copy-and-paste things to colleagues (e.g., a table from a journal article, or my notes from a conference I paid for), then I’d have to send them through Slack or some cloud-based service. Or we start using Google Docs instead of the presumably more secure enterprise Office 365.

It can happen, but being “caught” breaking the rules can lead to disciplinary action.

I can’t see why that should ever be the case, though. With robust processes, software which allows the job to be done should be procured.

What causes this is onerous processes.

I feel for you.

My IT department decided to install two competing security packages on every managed Mac. And by “competing” I don’t just mean two different vendors, but packages that actively interfered with each other, causing high CPU loads and fans blowing almost all the time…

The same IT department blocked Monterey for almost a year because one of these two tools had not been updated for Monterey…

Great information from @ibuys and @wweber and @geoffaire. Thank you.

I imagine a legal team at a university weighing having to spend significantly more money on insurance to cover a violation against letting IT take significantly more flak after implementing a security lockdown. I don’t envy IT, caught in the middle.

The lockdown on administrator accounts has only just started where I am. I anticipate, once this requirement is set in stone, it will raise comments and outcries from our faculty that will likely mirror what is covered here. One of my hats is being a representative on our Faculty Senate. Perhaps the best step is to request better education for faculty on the reasons behind the approach, following on @ibuys. And for the sake of sanity, faculty and IT have to work together to ensure that security measures do not become edicts simply because they are the easiest ones for the legal department to choose.


JJW


That is horrible. I’ve seen competing AV packages fight over the same resources. It’s not pretty.

Unless the computer or software does not comply with current security policy and procedures, now influenced by zero-trust security models (see below).

The “territory” these days is zero-trust security. Zero-trust architecture assumes that no matter how robust a company’s external defenses are, hackers can get in, so companies need to make sure that even users inside the network can’t do serious damage. This is a Very Big Deal in security circles these days, for good reason. Links that explain in more detail, and a toy sketch after them:

WSJ zero-trust article (safe link, hopefully not behind a paywall):
https://www.wsj.com/articles/cyberattacks-hacking-lapsuss-zero-trust-okta-uber-rockstar-11663969967?st=aj56t38ugc5ex1w&reflink=desktopwebshare_permalink

From Okta, a leading identity-security provider:
https://www.okta.com/blog/2019/01/what-is-zero-trust-security/
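To make the “never trust network location” idea concrete, here is a toy, hypothetical sketch of a per-request policy check in the zero-trust spirit. All the names (Request, records_staff, the posture flags) are invented for illustration; real deployments rely on products like Okta rather than hand-rolled checks.

```python
# Toy per-request check in the zero-trust spirit: every request is evaluated
# on identity and device posture, and network location earns no trust at all.
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    mfa_verified: bool           # identity proven for this session, not just at login
    device_compliant: bool       # e.g., disk encrypted, OS patched, MDM-enrolled
    resource: str
    from_internal_network: bool  # deliberately ignored below

SENSITIVE = {"student_records", "payroll"}

def records_staff(resource: str) -> set[str]:
    # Hypothetical lookup: who needs this resource to do their job?
    return {"registrar"} if resource == "student_records" else {"hr_lead"}

def authorize(req: Request) -> bool:
    """Evaluate each request on its own merits; being inside the LAN means nothing."""
    if not (req.mfa_verified and req.device_compliant):
        return False
    if req.resource in SENSITIVE and req.user not in records_staff(req.resource):
        return False
    return True

# Even a request from inside the network is denied without fresh MFA.
inside = Request("prof_x", mfa_verified=False, device_compliant=True,
                 resource="student_records", from_internal_network=True)
print(authorize(inside))  # False
```

The point is what the code ignores: `from_internal_network` buys nothing, which is the whole departure from the old castle-and-moat model.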


Or, of course, where failure to apply appropriate policies would be against laws or regulations; with GDPR in the UK and the European Union, many security best practices are seen as minimum standards. In the US, privacy laws are a little more piecemeal, or apply to certain data, as with HIPAA.

When I started in I.T. there wasn’t even a name for the job I was doing. We were using DEC terminals connected to a local midrange and an out-of-state IBM mainframe. Later, when we started buying PCs, they were running DOS and there was no way to lock them down. Until we started running Windows NT, keeping our computers running and virus-free was an almost daily game of whack-a-mole. Removing the users’ ability to install software and make changes reduced problems by 90+%.

Until I retired a few years ago I had a no-admin-privileges policy everywhere I worked. It was never a major problem because we worked with our users to make sure they had everything they needed, including, when possible, the software they preferred. And on a few occasions I gave a few users an admin account on their local equipment.

As others have said, these days you may have no choice. For example, if you take credit cards in the U.S., you either comply with the Payment Card Industry (PCI) standards or you won’t be permitted to accept CC payments. IMO it takes I.T. and users working together to get the job done.


PCI is a good analogue. Some companies comply with it as obtusely/restrictively as possible. Others approach it with a deeper knowledge of their systems and use tools that reduce the quantity and replication of in-house sensitive data. That second approach requires better people and a psychologically safe organization. (And a better insurance provider; that is a classic “hands tied, won’t try to figure out how to untie them, sorry!” situation.)

These IT organizations are not incentivized to minimize the opportunity cost of their policies unless the wider organization is only minimally dysfunctional. And they are often staffed with people who aren’t even aware of those opportunity costs, who will point to a paper policy that says the right tool can be used, without acknowledging the power of defaults or their own inability to understand and process a use case, which is what it takes to support the most effective workflows for specialized ICs.

So thankful to work for smaller organizations that prioritize hiring for trustworthiness and professionalism, and that equip people accordingly.

I do not disagree. But …

One tenet presented in least-privilege access is that access is (only) granted to software needed to do the job. (my modifications and emphasis added)

The OP has software that is needed to do his job. Access to that software has been shut down.

Where does this place the onus to determine whether the needed software complies with zero-trust? Should the OP be responsible to prove to IT that the software he needs is fully zero-trust compliant before he can get permission to use it? Or should the IT department be responsible to prove that software the OP needs for his job violates zero-trust in some way before they can disallow it being used?

From a faculty perspective, a respectable middle ground could be to charge IT with the responsibility to vet software in a timely manner, using threat metrics that are pertinent rather than preemptively overburdening. A toy sketch of what that boils down to is below.
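To make the vetting question concrete, here is a hypothetical sketch of what a least-privilege software allow-list with a review deadline reduces to. The catalog, the 180-day review SLA, and the approval states are all invented for illustration; real organisations implement this through MDM tooling, not code like this.

```python
# Toy allow-list: software runs only if IT has vetted it, and vetting expires
# so that "timely manner" is an enforced obligation, not an aspiration.
from datetime import date, timedelta

# Hypothetical vetting catalog maintained by IT.
CATALOG = {
    "BBEdit": {"approved": True, "last_review": date(2023, 1, 10)},
    "Obsidian": {"approved": False, "last_review": date(2022, 6, 1)},
}
REVIEW_SLA = timedelta(days=180)  # "timely manner" made concrete

def may_run(app: str, today: date) -> bool:
    entry = CATALOG.get(app)
    if entry is None:
        return False  # unknown software is denied by default
    stale = today - entry["last_review"] > REVIEW_SLA
    return entry["approved"] and not stale

print(may_run("BBEdit", date(2023, 3, 1)))  # True: vetted and current
print(may_run("Drafts", date(2023, 3, 1)))  # False: never assessed
```

The deny-by-default line is where the onus question bites: under this model the burden falls on getting software into the catalog, which is why the review SLA matters as much as the list itself.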


JJW

Are there Mac-using attorneys out there who work in the cyber insurance/information security space? This could be an interesting episode topic.