Nostr account is planned
The blue check is free when you have a lot of followers who have the blue check; we don't pay for Twitter.
I don't have the Signal app, I find the Pixel 9 lasts for days, and I use the battery limit feature.
The PR I mentioned is still open. If the issue comes back, the attached is probably the reason.
Quoting this for other #GrapheneOS users: if you have battery drain, it's probably because you have Signal.
https://github.com/signalapp/Signal-Android/issues/13704
The hardened Signal fork Molly is far more optimized and doesn't have this issue. A pull request was made to Signal to fix it, but they never merged it. It's been ignored.
I've raised this, but note that certain situations still let apps keep running, and I'm not able to reproduce it. Also note that minimizing the app is not the same as closing the app. They're different: you must close it from the recents screen.
Apps have to go out of their way to keep running: they must show a prominent notification and remain a foreground service, and that's what stops them from being fully closed. Apps and foreground services can also re-start other apps (that's why the disable feature exists).
Anyways CCing so everyone else can see: nostr:nprofile1qqs9g69ua6m5ec6ukstnmnyewj7a4j0gjjn5hu75f7w23d64gczunmgpz4mhxue69uhhyetvv9ujumt0wd68ytnsw43qz9thwden5te0v35hgar09ec82c30wfjkcctexf34p3
Revolut is specifically banning #GrapheneOS by checking for the build machine hostname and username being set to 'grapheneos'. We've changed these to build-host and build-user. Combined with another change, this allows our users to log in to it again until they roll out Play Integrity API enforcement.
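For illustration only, here's a minimal sketch (not Revolut's actual code; the function name is made up) of the kind of check a native library can do by reading the build machine properties through the NDK:

```c
// Hypothetical sketch: reading the ro.build.host / ro.build.user system
// properties, which record the hostname and username of the build machine.
#include <string.h>
#include <sys/system_properties.h>

static int build_host_looks_like_grapheneos(void) {
    char host[PROP_VALUE_MAX] = {0};
    char user[PROP_VALUE_MAX] = {0};
    __system_property_get("ro.build.host", host);
    __system_property_get("ro.build.user", user);
    // These used to be "grapheneos" on GrapheneOS builds; they're now
    // "build-host" and "build-user", so this particular check no longer works.
    return strcmp(host, "grapheneos") == 0 || strcmp(user, "grapheneos") == 0;
}
```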
There's no legitimate excuse for banning the use of a much more private and secure operating system while permitting devices with no security patches for a decade. Meanwhile, Revolut's shoddily made app tells users they're banning GrapheneOS because they're "serious about keeping your data secure".
Revolut's app will stop working again once they start enforcing having a Play Integrity API result showing it's a Google certified device. This is not a security feature but rather anti-competitive behavior from Google deployed by apps like Revolut wanting to pretend they care about security.
Revolut uses a bunch of shady closed source third party libraries in their app and it's one of these libraries banning GrapheneOS. These libraries are a major security risk and put user data at risk of being compromised. Revolut is not taking user security seriously at all and is cutting corners.
Android uses Clang type-based forward edge Control Flow Integrity (CFI) for the kernel and a subset of userspace. It isn't a high impact security feature. We used to have changes expanding userspace coverage but Android is already doing it and we moved this effort to higher impact work.
Unlike Chrome, we enable type-based forward edge CFI for our Vanadium browser to cover the default browser and WebView. Other than that, the usage of Clang CFI has the same scope as the stock Pixel OS and our focus is on higher impact areas. Expanding it causes regressions we have to address.
Unlike the stock Pixel OS, we enable branch target identification (BTI) to address holes in Clang CFI coverage in the kernel and the lack of full deployment in userspace. BTI is coarse grained CFI and is an extremely weak security feature but it's easy to enable and doesn't cause regressions.
Unlike the stock Pixel OS, we enable pointer authentication (PAC) return protection for userspace instead of only the kernel. Similar to BTI, this is easy to enable and doesn't cause regressions. Unlike the stock Pixel OS, we use Shadow Call Stack as an extra layer on top of PAC in the kernel.
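For context, BTI and PAC return protection are toolchain features applied at build time rather than code changes. A minimal sketch of what enabling them looks like for AArch64 userspace code, assuming a Clang toolchain (illustrative only, not our build system configuration):

```c
// Built with something like:
//   clang --target=aarch64-linux-android -mbranch-protection=standard -O2 -c example.c
// "standard" enables pac-ret + bti: function prologues sign the return
// address (PACIASP), epilogues authenticate it (AUTIASP), and every valid
// indirect branch target begins with a BTI landing pad instruction.
int example(int a, int b) {
    return a + b;
}
```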
Instead of working on expanding CFI coverage, our focus is on higher impact features including hardware memory tagging (MTE). We have a best-in-class implementation of MTE for heap protection in hardened_malloc and we deploy MTE for all but a single userspace process (camera HAL).
Our most recent release enabled MTE for the Linux kernel allocators too: https://grapheneos.org/releases#2025011500. We need to improve the kernel implementation to enforce deterministic guarantees with it as hardened_malloc does. We're also planning to deploy stack allocation MTE for both the kernel and userspace.
MTE directly uncovers memory corruption bugs which are often security bugs. Type-based CFI uncovers type mismatches which block deploying it but rarely have any direct security impact. These are major ongoing areas of work as software changes, not only for the initial deployment.
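As a concrete illustration (a minimal sketch, not GrapheneOS code), this is the kind of memory corruption bug heap MTE turns into a tag check fault when the allocator tags memory the way hardened_malloc does:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int *p = malloc(sizeof(int));
    if (!p) return 1;
    *p = 42;
    free(p);
    // Use-after-free: with MTE the freed memory is retagged, so the stale
    // pointer's tag no longer matches and this load faults. Without MTE the
    // read may silently "work" and the bug goes unnoticed.
    printf("%d\n", *p);
    return 0;
}
```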
The recently published paper "Twenty years later: Evaluating the Adoption of Control Flow Integrity" has major inaccuracies due to flaws in their tools and methodology. They inaccurately claim we don't expand coverage of CFI and wrongly claim we reduce it on end-of-life 4th/5th gen Pixels.
4th/5th gen Pixels are end-of-life. We make it clear that our releases for them are unofficial extended support releases for harm reduction. Regardless, we do not reduce CFI coverage on them. We also don't increase the amount of executable code as they claim but rather greatly reduce it.
They didn't look into what was actually happening and causing their tools to report what they did. Their claims are likely based on the way we handle the end of device support, which ends up placing the same executables and libraries in more locations. They also miss a whole lot, including Chromium.
The paper also wrongly portrays us ending up with largely the same compiled code as the stock OS, on devices where there aren't major differences such as PAC and BTI, as us reusing code from the stock OS. No, that's not what is happening. Android builds are reproducible. That's not reuse.
The paper describes Android 13 as the most recent release of Android and lists releases from April 2023 as what they analyzed, so there's also the issue of it being very outdated. This doesn't explain the inaccuracies, but it does partially explain why it doesn't cover PAC and BTI.
There are other inaccuracies including about #GrapheneOS but we don't need to go through all of it point by point. We'll be contacting the authors asking for corrections but our expectations are low based on past experience. Posting this is our main way to address it.
Similar to ASLR, forward-edge type-based CFI exists to make exploiting memory corruption vulnerabilities harder in the general case. It protects specific data (function pointers and virtual function tables) from attacks and only outright blocks attacks in a small subset of cases.
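A minimal sketch of what that means in practice (illustrative only; Clang's type-based indirect call checking is enabled with -flto, -fvisibility=hidden and -fsanitize=cfi-icall):

```c
#include <stdio.h>

static void greet(const char *name) {
    printf("hello %s\n", name);
}

static int unrelated(int x) {
    return x * 2;
}

int main(void) {
    void (*handler)(const char *) = greet;
    handler("world");  // fine: the target is a function of the expected type

    // If an attacker (or a bug) redirects the pointer to code with a
    // different type, the indirect call fails the CFI check and aborts
    // instead of transferring control there.
    handler = (void (*)(const char *))unrelated;
    handler("world");
    return 0;
}
```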
Clang CFI is least powerful in the Linux kernel and browser engine due to major holes in what it protects and many alternatives to function pointers for taking over control. Unfortunately, those are 2 of the places in the most need of much better exploit protections.
Android initially deployed CFI in the media stack and that's still where they focus on it. It was then added for the whole kernel. They're gradually expanding it in userspace starting in higher impact areas. We don't see much value in us focusing on gradually bringing it to lower impact areas.
Turning CFI from a weak or mediocre protection largely acting as an annoyance for attackers into a strong protection would require additional compiler features and a massive overhaul to software. There are too many holes in these protections and other exploit targets. It's ultimately a dead end.
Rather than making enormous changes to protect or remove exploit targets bit by bit, we need a strong general purpose protection. Memory tagging provides that, with the major caveat that MTE is only 4 bits. A future version can be given the bits from PAC. A far future one can use fat pointers.
Memory tagging with fat pointers will be usable to bolt on low overhead memory safety to massive legacy software stacks and to protect against remaining holes in memory safe languages. It's too bad there's so much focus on piling on more and more weak mitigations instead of building solutions.
nostr:nprofile1qqstnr0dfn4w5grepk7t8sc5qp5jqzwnf3lejf7zs6p44xdhfqd9cgspzpmhxue69uhkummnw3ezumt0d5hszrnhwden5te0dehhxtnvdakz7qgawaehxw309ahx7um5wghxy6t5vdhkjmn9wgh8xmmrd9skctcnv0md0 do you know why recently a lot more apps crash without exploit protection compatibility mode?
Those apps used to work, but 2-3 upgrades ago it got a lot worse.
Are these native apps or web apps? Vanadium was recently updated to fix a bug in one of its patches that was caught by memory tagging.
As for native apps, it wouldn't be anything we've done, as we haven't made any changes beyond providing an opt-in global switch for the other DCL restrictions.
The Pixel 9 has very slight security benefits over the 8th generation, not a gigantic leap like the one from the 7th to the 8th generation.
Pixel 9's modem has additional hardening: https://security.googleblog.com/2024/10/pixel-proactive-security-cellular-modems.html?m=1
Because of the higher Linux kernel version, some Linux kernel security features like memory structure randomisation can be used. This is exclusively enabled in GrapheneOS:
An explanation on that is here: https://stacker.news/items/670170#randstruct-kernel-61-and-above
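As a rough illustration (not an actual kernel struct), CONFIG_RANDSTRUCT shuffles the field order of structs opted in with __randomize_layout, and of structs consisting entirely of function pointers, at build time, so attackers can't rely on fixed offsets:

```c
/* Hypothetical kernel-style struct: with CONFIG_RANDSTRUCT enabled, the
 * order of these fields is randomized per build, so an exploit can't
 * hardcode which offset holds which pointer. */
struct example_ops {
    int  (*open)(void *ctx);
    int  (*read)(void *ctx, char *buf, unsigned long len);
    void (*close)(void *ctx);
} __randomize_layout;
```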
nostr:nprofile1qy2hwumn8ghj7un9d3shjtnddaehgu3wwp6kyqpqzafcms4xya5ap9zr7xxr0jlrtrattwlesytn2s42030lzu0dwlzqavmejr nostr:nprofile1qy2hwumn8ghj7un9d3shjtnddaehgu3wwp6kyqpqx458tl7h9xcxa66vr4a8pg0h2qz96pnhwnfpcra0le9090uk5t5qjlvqlu nostr:nprofile1qy2hwumn8ghj7un9d3shjtnddaehgu3wwp6kyqpqaxmrvkyyh6muj6jcd3md7j0zu0z78hlr0uh7huu5v5lgs98y0yksa2unmw nostr:nprofile1qy2hwumn8ghj7un9d3shjtnddaehgu3wwp6kyqpqa00wj229auzjswlq4s77y4u8eqdx5k9ppatgl8rtv8va65f6mwksq65r4q Worth noting we have access to the January Cellebrite Premium capabilities and it's not significantly different. They didn't figure out how to exploit GrapheneOS. Pixel 9 is very new and they didn't figure out how to exploit the stock OS likely due to Linux 6.1, but they should soon.
For users wondering why the Project hasn't posted new Cellebrite docs: we haven't made any updates regarding Cellebrite Premium since there are few changes beyond new iPhone support. The Pixel 9 is also currently unsupported for stock OS exploitation, likely because it's on a different kernel branch than the older devices: the Pixel 9 uses Linux 6.1. That's likely to change in the coming months if their work rate is anything to go by.
There's no point taking risks to publish content whose changes mean little to GrapheneOS users. We're more interested in what other providers have to offer, or in how their stock OS exploits work.
Profiles are used for these situations:
- You want to run a separate VPN configuration for apps.
- You need to separately encrypt certain data with different keys and have it be at rest even when the device (Owner profile) is in the After First Unlock (AFU) state.
- You want to restrict inter process communications between apps.
- You want to run multiple unique instances of one app.
- You want a large set of app data and files you can delete with a single button press without factory resetting the device.
Ideally, you want to use as few profiles as possible unless the purpose is one of these five. Otherwise you'll overcomplicate things by having to swap profiles all the time. Most people use a profile to separate apps for privacy-invasive services they want to use or for any apps requiring sandboxed Google Play. Some may also use one to separate identities or online profiles. There isn't an ideal way, just things you can do to keep it sensible.
#GrapheneOS version 2025011500 released.
This update provides currently exclusive, early fixes for security bugs in the lock screen originating from the upstream stock Android. They don't affect typical users, as they wouldn't use such a unique accessibility configuration to allow this. These vulnerabilities also didn't allow bypassing the lock screen, but rather bypassing certain restrictions on fingerprint unlocks. Early fixes like these help GrapheneOS stand out.
MTE has been extended to the kernel allocators, which significantly increases kernel security. We're also shipping kernel security patches earlier than anyone else, including the patch for CVE-2024-56556 (a High severity vulnerability).
Changes:
• fix upstream Android lockscreen bug triggered by the combination of fully disabling animations (via Settings > Accessibility > Color and motion > Remove animations) and enabling always-on display (Settings > Display > Lock screen > Always show time and info), which results in the locking process getting stuck and not considering the device locked until it wakes again due to not skipping animations as intended (this did not create a lockscreen bypass but did result in valid biometric unlock credentials skipping restrictions)
• add protection against upstream lockscreen bugs bypassing restrictions on biometric unlocking while the device is asleep including the standard restrictions and our recently added 2-factor fingerprint unlock feature
• kernel (Pixel 8, Pixel 8 Pro, Pixel 8a, Pixel 9, Pixel 9 Pro, Pixel 9 Pro XL, Pixel 9 Pro Fold): enable hardware memory tagging for the main kernel allocators via the upstream Hardware Tag-Based KASAN implementation (which is intended for production usage, unlike the other KASAN modes)
• kernel (Pixel 8, Pixel 8 Pro, Pixel 8a, Pixel 9, Pixel 9 Pro, Pixel 9 Pro XL, Pixel 9 Pro Fold): switch KASAN fault handling from report to panic to use it as a hardening feature instead of only a bug finding tool
• kernel (Pixel 8, Pixel 8 Pro, Pixel 8a, Pixel 9, Pixel 9 Pro, Pixel 9 Pro XL, Pixel 9 Pro Fold): switch KASAN hardware memory tagging mode from synchronous to asymmetric for the initial deployment to reduce the performance cost and match our existing hardware memory tagging usage in userspace (synchronous mode is potentially more useful in the kernel than it is for userspace which is something we can investigate and potentially offer as an option)
• kernel (Pixel 8, Pixel 8 Pro, Pixel 8a, Pixel 9, Pixel 9 Pro, Pixel 9 Pro XL, Pixel 9 Pro Fold): disable our slab canary feature since it's incompatible with the kernel's hardware memory tagging and will be obsolete after we've made basic improvements to the upstream hardware memory tagging implementation
• Updater: require TLSv1.3 instead of either TLSv1.2 or TLSv1.3
• kernel (5.10): update to latest GKI LTS branch revision including update to 5.10.233
• kernel (5.15): update to latest GKI LTS branch revision including update to 5.15.176
• kernel (5.15): merge latest GKI tag to incorporate important security and other patches including the patch for CVE-2024-56556 which are not included in the latest kernel.org release (5.15.176) or the latest GKI LTS branch revision
• kernel (6.6): update to latest GKI LTS branch revision
• Seedvault: update to a newer revision (will be replaced with a better backup implementation in the future)
• System UI Tuner: opt-out of Android 15 edge-to-edge since it's not properly supported yet (upstream bug)
• make eng builds more consistent with user/userdebug builds by extending the GrapheneOS additions of the ro.control_privapp_permissions=enforce, net.tethering.noprovisioning=true and ro.sys.time_detector_update_diff=50 system properties to all build variants
• show a system error notification for privileged permission allowlist violations in development builds (userdebug and eng builds) instead of breaking booting the device to make developing device support and porting to new OS versions easier
You can build emulator images via the build tutorial page. We don't provide any images to download, though. Not all features would be fully available with them.
Most of these features are to do with privileged features in Google Play Store (Play Protect) and not Android itself. Means nothing to GrapheneOS:
https://security.googleblog.com/2024/05/io-2024-whats-new-in-android-security.html?m=1
The "live threat detection" is just trivial antimalware scanning based on heuristics for known malicious behaviours of installed apps, such as asking for accessibility services or device admins and other unnecessary permissions. Google calling it "AI powered" is it at best disingenuous. People falling into scares for it would also be the same people falling for such corporatespeak.
The "unsafe connection detection" appears to be about the feature of detecting fake cellular base stations or when their cellular network connection is unencrypted. This is a good feature, but ideally you wouldn't want to be using the cell network at all.
The "side loading restrictions" are already a thing since Android 13. What this means is apps installed in certain ways (like directly from an APK) are considered unsafe installs and are automatically blocked from accessing certain dangerous permissions. Currently apps through modern app stores don't have this and if you downloaded it anyway you have to go through a semi-hidden dialog to activate their access.
A future Android update is adding enhancements to the feature to provide a whitelist of sources, where only apps from those sources can have such permissions, without the ability to allow any from outside it. This is enforced by an XML file in the system partition, so GrapheneOS could just change it or not use it.
If they care about user freedom, app devs should never use such permissions unless they are absolutely necessary, because they are an attack surface risk and can be very dangerous. Accessibility services allow an app to make inputs on your behalf. This is also why Auditor detects when one is in use and you can see it in the audit results.
Edited the post a little, as I think a typo made it not make sense. To clarify: the enhancement to the restricted settings feature is to replace allowing restricted settings for ANY app store with only allowing a list of approved app stores.
It doesn't block using an app with those permissions though, and you can still use the previous dialog to enable restricted settings at your own heavy risk. When an app store is trusted, apps installed from it just won't have the restricted settings blocked on first use.
DNSNet (a continued fork of DNS66, a DNS-based ad/content blocker) is now on Accrescent.
GrapheneOS version 2025010700 released:
https://grapheneos.org/releases#2025010700
See the linked release notes for a summary of the improvements over the previous release.
Forum discussion thread:
https://discuss.grapheneos.org/d/18831-grapheneos-version-2025010700-released
#GrapheneOS #privacy #security
New #GrapheneOS update with latest security patch level and some small changes. Haptic feedback has been added for the 2FA Fingerprint unlock.
Telegram has full access to all of the content of group chats and regular one-to-one chats due to lack of end-to-end encryption. Their opt-in secret chats use homegrown end-to-end encryption with weaknesses. Deleting the content from the app likely won't remove all copies of it.
Telegram has heavily participated in misinformation campaigns targeting actual private messaging apps with always-enabled, properly implemented end-to-end encryption such as Signal. You should stop taking advice from anyone who told you to use Telegram as a private messenger.
Telegram is capable of handing over all messages in every group and regular one-to-one chat to authorities in France or any other country. A real private messaging app like Signal isn't capable of turning over your messages and media. Telegram/Discord aren't private platforms.
A major example of how Telegram's opt-in secret chat encryption has gone seriously wrong before: https://words.filippo.io/dispatches/telegram-ecdh/.
The practical near term threat is for the vast majority of chats without end-to-end encryption: 100% of Telegram group chats and the regular 1-to-1 chats.