* Add a limit to how many posts can get fetched as a result of a single request
* Add tests
* Always pass `request_id` when processing `Announce` activities
---------
Co-authored-by: nametoolong <nametoolong@users.noreply.github.com>
Some 7.x Elasticsearch versions still support 6.x nodes, so checking the server
version alone is inadequate. There is no obvious way to check whether a server
implements all the 7.x APIs, so check the server version and the minimum wire
version instead.
* fix(status): remove send usage for private unlink_from_conversations
- make unlink_from_conversations a public method
- rename unlink_from_conversations to unlink_from_conversations!
- fix the send call on the private method in statuses_vacuum and batched_remove_status_service
* fix(feeds_vacuum): replace find_in_batches with in_batches
Active Record relations should be a little more efficient than iterating with
map and each: Postgres can handle such lists of ids much quicker than Ruby can
(see the sketch below).
This will probably make almost no difference, but it cannot hurt either.
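A minimal sketch of the difference, with assumed scope and helper names (not the exact vacuum code):
```ruby
# find_in_batches yields plain Arrays, so follow-up work has to pull ids back
# out in Ruby; in_batches yields an ActiveRecord::Relation per batch, so
# Postgres can work with the batch directly.
User.inactive.select(:id, :account_id).in_batches do |batch|
  clean_home_feeds!(batch.pluck(:account_id)) # hypothetical helper; one query per batch
end
```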
* Fix home timeline containing posts from users who blocked me
* Add test
* Fix feed_manager's build_crutches
`blocked_by` did not include the status' owner
* Add test for statuses from users I blocked
* Fix typo
Unwrapping a nil value causes the admin dashboard to crash with a 500 when the Chewy client info's version number is nil.
This occurs when running another ES-compatible backend such as MeiliSearch.
Ideally Chewy would recognise such backends upstream, but at least avoiding the crash would be fine (see the sketch below).
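A minimal sketch of guarding against the nil version (names and placement are assumptions):
```ruby
# Chewy.client.info returns the Elasticsearch "/" info document; with other
# ES-compatible backends the version number may be nil or missing entirely.
version = Chewy.client.info.dig('version', 'number')
es_version = version.presence || 'unknown' # avoid calling methods on nil
```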
* Fix trying to fetch posts from other users when fetching featured posts
* Rate-limit discovery of new subdomains
* Put a limit on recursively discovering new accounts
* refactor(statuses_vacuum): remove dead code
The method is private and never called inside the class, so clean it up.
* refactor(statuses_vacuum): make the retention_period presence check explicit
The private method only hid functionality; it is best practice to be as
explicit as possible.
* refactor(statuses_vacuum): improve query performance (see the sketch after this list)
- replace the sub-select for the Account.remote scope in statuses_scope with
`joins(:account).merge(Account.remote)`
- drop the unnecessary use of `Status.arel_table[:id].lt` in statuses_scope,
because it is less explicit, bad practice and even slower than a plain
`.where('statuses.id < ?', ...)`
- remove select(:id, :visibility) from statuses_scope so the Active Record
query batches stay reusable (no re-queries)
- change vacuum_statuses! to use in_batches instead of find_in_batches,
because in_batches yields a full Active Record relation instead of an
array - no re-queries necessary
- make send(:unlink_from_conversations) reuse the in_batches result instead
of performing another db query
- remove the now-obsolete remove_from_account_conversations method
- pass an array of ids to remove_from_search_index instead of mapping the
ids out of an array of records - this should be more efficient
- call delete_all on the in_batches scope instead of running another db
query for this - because it is again more efficient
- add a TODO comment for calling the model's private method with send
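A condensed sketch of the refactor described in this list (method and helper names are assumptions; `retention_period_as_id` stands in for the real snowflake id cutoff):
```ruby
def statuses_scope
  # join + merge instead of a sub-select; plain SQL instead of arel for the id bound
  Status.joins(:account).merge(Account.remote)
        .where('statuses.id < ?', retention_period_as_id)
end

def vacuum_statuses!
  statuses_scope.in_batches do |statuses|               # a relation, not an array
    remove_from_search_index(statuses.ids) if Chewy.enabled?
    # TODO: calling a private model method via send
    statuses.each { |status| status.send(:unlink_from_conversations) }
    statuses.delete_all                                 # reuse the same batch relation
  end
end
```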
* refactor(status): simplify unlink_from_conversations (see the sketch below)
- add a `has_many through:` association mentioned_accounts
- use the `local` model scope instead of calling `local?` on each record
- add the status' own account to inbox_owners more readably when account.local?
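Roughly, the simplified method could look like this (association and method names are assumptions, not a verbatim copy):
```ruby
class Status < ApplicationRecord
  has_many :mentions, dependent: :destroy, inverse_of: :status
  has_many :mentioned_accounts, through: :mentions, source: :account

  def unlink_from_conversations!
    return unless direct_visibility?

    inbox_owners = mentioned_accounts.local.to_a # scope instead of select(&:local?)
    inbox_owners << account if account.local?    # the status' own author, when local

    inbox_owners.each do |inbox_owner|
      AccountConversation.remove_status(inbox_owner, self)
    end
  end
end
```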
* refactor(status): searchable_by with far fewer sub-selects
These queries all included a sub-select. Doing the same with a join
should be more efficient. Since this method builds five such queries,
the gain should be significant, as each of them loses one sub-select.
This is how it was:
```ruby
[3] pry(main)> Status.first.mentions.where(account: Account.local, silent: false).explain
Status Load (1.6ms) SELECT "statuses".* FROM "statuses" WHERE "statuses"."deleted_at" IS NULL ORDER BY "statuses"."id" DESC LIMIT $1 [["LIMIT", 1]]
Mention Load (1.5ms) SELECT "mentions".* FROM "mentions" WHERE "mentions"."status_id" = $1 AND "mentions"."account_id" IN (SELECT "accounts"."id" FROM "accounts" WHERE "accounts"."domain" IS NULL) AND "mentions"."silent" = $2 [["status_id", 109382923142288414], ["silent", false]]
=> EXPLAIN for: SELECT "mentions".* FROM "mentions" WHERE "mentions"."status_id" = $1 AND "mentions"."account_id" IN (SELECT "accounts"."id" FROM "accounts" WHERE "accounts"."domain" IS NULL) AND "mentions"."silent" = $2 [["status_id", 109382923142288414], ["silent", false]]
QUERY PLAN
------------------------------------------------------------------------------------------------------------------
Nested Loop (cost=0.15..23.08 rows=1 width=41)
-> Seq Scan on accounts (cost=0.00..10.90 rows=1 width=8)
Filter: (domain IS NULL)
-> Index Scan using index_mentions_on_account_id_and_status_id on mentions (cost=0.15..8.17 rows=1 width=41)
Index Cond: ((account_id = accounts.id) AND (status_id = '109382923142288414'::bigint))
Filter: (NOT silent)
(6 rows)
```
This is how it is with this change:
```ruby
[4] pry(main)> Status.first.mentions.joins(:account).merge(Account.local).active.explain
Status Load (1.7ms) SELECT "statuses".* FROM "statuses" WHERE "statuses"."deleted_at" IS NULL ORDER BY "statuses"."id" DESC LIMIT $1 [["LIMIT", 1]]
Mention Load (0.7ms) SELECT "mentions".* FROM "mentions" INNER JOIN "accounts" ON "accounts"."id" = "mentions"."account_id" WHERE "mentions"."status_id" = $1 AND "accounts"."domain" IS NULL AND "mentions"."silent" = $2 [["status_id", 109382923142288414], ["silent", false]]
=> EXPLAIN for: SELECT "mentions".* FROM "mentions" INNER JOIN "accounts" ON "accounts"."id" = "mentions"."account_id" WHERE "mentions"."status_id" = $1 AND "accounts"."domain" IS NULL AND "mentions"."silent" = $2 [["status_id", 109382923142288414], ["silent", false]]
QUERY PLAN
------------------------------------------------------------------------------------------------------------------
Nested Loop (cost=0.15..23.08 rows=1 width=41)
-> Seq Scan on accounts (cost=0.00..10.90 rows=1 width=8)
Filter: (domain IS NULL)
-> Index Scan using index_mentions_on_account_id_and_status_id on mentions (cost=0.15..8.17 rows=1 width=41)
Index Cond: ((account_id = accounts.id) AND (status_id = '109382923142288414'::bigint))
Filter: (NOT silent)
(6 rows)
```
* Fix ArgumentError when a proxy is used
* Update app/lib/request.rb
Co-authored-by: Claire <claire.github-309c@sitedethib.com>
Co-authored-by: Eugen Rochko <eugen@zeonfederated.com>
Co-authored-by: Claire <claire.github-309c@sitedethib.com>
Switch to monkey-patching http.rb rather than a runtime extend of each
response, so as to avoid busting the global method cache. A guard is
included that will provide developer feedback in development and test
environments should the monkey patch ever collide.
- Improve tests
- Fix possible crash when application of a reblogged post isn't set
- Fix discrepancies around favourited and reblogged attributes
- Fix discrepancies around pinned attribute
- Fix polls not being hydrated
* Change “Translate” button to only show up when a translation backend is configured
Fixes #19346
* Add `translation` attribute to /api/v2/instance to expose whether the translation feature is enabled
Fixes #19328
* Change public accounts pages to mount the web UI
* Fix handling of remote usernames in routes
- When logged in, serve web app
- When logged out, redirect to permalink
- Fix `app-body` class not being set sometimes due to name conflict
* Fix missing `multiColumn` prop
* Fix failing test
* Use `discoverable` attribute to control indexing directives
* Fix `<ColumnLoading />` not using `multiColumn`
* Add `noindex` to accounts in REST API
* Change noindex directive to not be rendered by default before a route is mounted
* Add loading indicator for detailed status in web UI
* Fix missing indicator appearing while account is loading in web UI
* Move ActivityPub::FetchRemoteAccountService to ActivityPub::FetchRemoteActorService
ActivityPub::FetchRemoteAccountService is kept as a wrapper for when the actor is
specifically required to be an Account
* Refactor SignatureVerification to allow non-Account actors
* fixup! Move ActivityPub::FetchRemoteAccountService to ActivityPub::FetchRemoteActorService
* Refactor ActivityPub::FetchRemoteKeyService to potentially return non-Account actors
* Refactor inbound ActivityPub payload processing to accept non-Account actors
* Refactor inbound ActivityPub processing to accept activities relayed through non-Account
* Refactor how Account key URIs are built
* Refactor Request and drop unused key_id_format parameter
* Rename ActivityPub::Dereferencer `signature_account` to `signature_actor`
* Add a more descriptive PrivateNetworkAddressError exception class
* Remove unnecessary exception class to rescue clause
* Remove unnecessary include to JsonLdHelper
* Give more neutral error message when too many webfinger redirects
* Remove unnecessary guard condition
* Rework how “ActivityPub::FetchRemoteAccountService” handles errors
Add “suppress_errors” keyword argument to avoid raising errors in
ActivityPub::FetchRemoteAccountService#call (default/previous behavior).
* Rework how “ActivityPub::FetchRemoteKeyService” handles errors
Add “suppress_errors” keyword argument to avoid raising errors in
ActivityPub::FetchRemoteKeyService#call (default/previous behavior).
* Fix Webfinger::RedirectError not being a subclass of Webfinger::Error
* Add suppress_errors option to ResolveAccountService
Defaults to true (to preserve previous behavior). If set to false,
errors will be raised instead of caught, allowing the caller to be
informed of what went wrong.
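The suppress_errors pattern described above, sketched in simplified form (method names assumed):
```ruby
class ResolveAccountService < BaseService
  def call(uri, suppress_errors: true)
    @suppress_errors = suppress_errors
    process_uri!(uri) # placeholder for the actual resolution steps
  rescue Webfinger::Error => e
    raise unless @suppress_errors # callers can opt in to seeing the failure
    Rails.logger.debug { "Resolution of #{uri} failed: #{e.message}" }
    nil
  end
end
```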
* Return more precise error when failing to fetch account signing AP payloads
* Add tests
* Fixes
* Refactor error handling a bit
* Fix various issues
* Add specific error when provided Digest is not 256 bits of base64-encoded data
* Please CodeClimate
* Improve webfinger error reporting
* Add model for custom filter keywords
* Use CustomFilterKeyword internally
Does not change the API
* Fix /filters/edit and /filters/new
* Add migration tests
* Remove whole_word column from custom_filters (covered by custom_filter_keywords)
* Redesign /filters
Instead of a list, present a card that displays more information and handles
multiple keywords per filter.
* Redesign /filters/new and /filters/edit to add and remove keywords
This adds a new gem dependency: cocoon, as well as a npm dependency:
cocoon-js-vanilla. Those are used to easily populate and remove form fields
from the user interface when manipulating multiple keyword filters at once.
* Add /api/v2/filters to edit filter with multiple keywords
Entities:
- `Filter`: `id`, `title`, `filter_action` (either `hide` or `warn`), `context`,
`keywords`
- `FilterKeyword`: `id`, `keyword`, `whole_word`
API endpoints:
- `GET /api/v2/filters` to list filters (including keywords)
- `POST /api/v2/filters` to create a new filter
`keywords_attributes` can also be passed to create keywords in one request
- `GET /api/v2/filters/:id` to read a particular filter
- `PUT /api/v2/filters/:id` to update a new filter
`keywords_attributes` can also be passed to edit, delete or add keywords in
one request
- `DELETE /api/v2/filters/:id` to delete a particular filter
- `GET /api/v2/filters/:id/keywords` to list keywords for a filter
- `POST /api/v2/filters/:filter_id/keywords` to add a new keyword to a
filter
- `GET /api/v2/filter_keywords/:id` to read a particular keyword
- `PUT /api/v2/filter_keywords/:id` to edit a particular keyword
- `DELETE /api/v2/filter_keywords/:id` to delete a particular keyword
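The `keywords_attributes` behaviour described above is standard Rails nested attributes; a minimal sketch of the wiring (simplified, exact options assumed):
```ruby
class CustomFilter < ApplicationRecord
  has_many :keywords, class_name: 'CustomFilterKeyword',
                      inverse_of: :custom_filter, dependent: :destroy
  # lets a single POST/PUT create, edit or (with _destroy) delete keywords
  accepts_nested_attributes_for :keywords, allow_destroy: true
end
```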
* Change from `irreversible` boolean to `action` enum
* Remove irrelevant `irreversible_must_be_within_context` check
* Fix /filters/new and /filters/edit with update for filter_action
* Fix Rubocop/Codeclimate complaining about task names
* Refactor FeedManager#phrase_filtered?
This moves regexp building and filter caching to the `CustomFilter` class.
This does not change the functional behavior yet, but this changes how the
cache is built, doing per-custom_filter regexps so that filters can be matched
independently, while still offering caching.
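A sketch of what “per-custom_filter regexps with caching” can look like (cache key and method names are assumptions):
```ruby
class CustomFilter < ApplicationRecord
  def self.cached_filters_for(account_id)
    Rails.cache.fetch("filters:#{account_id}") do
      where(account_id: account_id).includes(:keywords).map do |filter|
        [filter, filter.to_regex] # one regexp per filter, matched independently
      end
    end
  end

  def to_regex
    Regexp.union(keywords.map do |keyword|
      expr = Regexp.escape(keyword.keyword)
      keyword.whole_word? ? /\b#{expr}\b/i : /#{expr}/i
    end)
  end
end
```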
* Perform server-side filtering and output result in REST API
* Fix numerous filters_changed events being sent when editing multiple keywords at once
* Add some tests
* Use the new API in the WebUI
- use client-side logic for filters we have fetched rules for.
This is so that filter changes can be retroactively applied without
reloading the UI.
- use server-side logic for filters we haven't fetched rules for yet
(e.g. network error, or initial timeline loading)
* Minor optimizations and refactoring
* Perform server-side filtering on the streaming server
* Change the wording of filter action labels
* Fix issues pointed out by linter
* Change design of “Show anyway” link in accordance with review comments
* Drop “irreversible” filtering behavior
* Move /api/v2/filter_keywords to /api/v1/filters/keywords
* Rename `filter_results` attribute to `filtered`
* Rename REST::LegacyFilterSerializer to REST::V1::FilterSerializer
* Fix systemChannelId value in streaming server
* Simplify code by removing client-side filtering code
The simplification comes at a cost though: filters aren't retroactively
applied anymore.
* Change RSS feeds
- Use date and time for titles instead of ellipsized text
- Use full content in body, even when there is a content warning
- Use media extensions
* Change feed icons and add width and height attributes to custom emojis
* Fix custom emoji animate on hover breaking
* Fix tests
* Fix PeerTube videos appearing with an erroneous “Edited at” marker
PeerTube videos have an `updated` field equal to `published`.
When processing an incoming activity that has the same value for `updated` and
`published`, assume this doesn't represent an actual edit.
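In other words, the edit marker check reduces to something like this (field access simplified):
```ruby
updated_at   = object['updated']
published_at = object['published']
# Only treat `updated` as an actual edit when it differs from `published`.
edited_at = updated_at if updated_at.present? && updated_at != published_at
```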
* Please CodeClimate
* Fix error responses in `from` search prefix (addresses mastodon/mastodon#17941)
Using unsupported prefixes now reports a 422; searching for posts from an
account the instance is not aware of reports a 404. TODO: The UI for this
on the front end is abysmal.
Searching `from:username@domain` now succeeds when `domain` is the local
domain; searching `from:@username(@domain)?` now works as expected.
* Remove unused methods on new Error classes as they are not being used
Currently when `raise`d there are error messages being supplied, but
this is not actually being used. The associated `raise`s have been
edited accordingly.
* Remove needless comments
* Satisfy rubocop
* Try fixing tests being unable to find AccountFindingConcern methods
* Satisfy rubocop
* Simplify `from` prefix logic
This incorporates @ClearlyClaire's suggestion (see
https://github.com/mastodon/mastodon/pull/17963#pullrequestreview-933986737).
Acceptable account strings in `from:` clauses are more lenient than
before this commit; for example, `from:@user@example.org@asnteo +cat`
will not error, and will return posts by @user@example.org containing the
word "cat". This is more consistent with how Mastodon matches mentions
in statuses. In addition, `from` clauses will not be checked for
syntactically invalid usernames or domain names, simply 404ing when
`Account.find_remote!` raises ActiveRecord::RecordNotFound.
New code for this PR that is no longer used has been removed.
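A sketch of the lenient `from:` handling described above (helper names assumed):
```ruby
username, domain = term.delete_prefix('@').split('@') # extra trailing segments are ignored
domain = nil if domain.blank? || TagManager.instance.local_domain?(domain)

# Raises ActiveRecord::RecordNotFound (rendered as a 404) when the account is
# unknown; no extra syntax validation of username/domain is performed.
account = Account.find_remote!(username, domain)
```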
* Change e-mail notifications to only be sent when recipient is offline
Change the default for follow and mention notifications back on
* Add preference to always send e-mail notifications
* Change wording
* Fix edits with no actual changes being allowed locally
* Fix edits with no actual changes being allowed through ActivityPub
* Fix false positive changes caused by description processing in model
* Fix not recording poll expiration update
* Fix test
* Revert changes to ProcessStatusUpdateService
* Various fixes and improvements
* Fix code style issues
* Various changes and improvements
* Add guard clause
* Change how changes to media attachments are stored for edits
Fix not being able to re-order media attachments
* Fix not broadcasting updates when polls/media is changed through ActivityPub
* Various fixes and improvements
* Update app/models/report.rb
Co-authored-by: Claire <claire.github-309c@sitedethib.com>
* Add tracking of media attachment description changes
* Change poll in status edit to have a structure closer to the real one
Co-authored-by: Claire <claire.github-309c@sitedethib.com>
* Change design of federation pages in admin UI
* Fix query performance in instance media attachments measure
* Fix reblogs being included in instance languages dimension
* Add `/api/v1/accounts/familiar_followers` to REST API
* Change hide network preference to be stored consistently for local and remote accounts
* Add dummy classes to migration
* Apply suggestions from code review
Co-authored-by: Claire <claire.github-309c@sitedethib.com>
Co-authored-by: Claire <claire.github-309c@sitedethib.com>
* Add trending statuses
* Fix dangling items with stale scores in localized sets
* Various fixes and improvements
- Change approve_all/reject_all to approve_accounts/reject_accounts
- Change Trends::Query methods to not mutate the original query
- Change Trends::Query#skip to offset
- Change follow recommendations to be refreshed in a transaction
* Add tests for trending statuses filtering behaviour
* Fix not applying filtering scope in controller
Video files with variable framerates are converted to constant framerate videos
and the output framerate picked by ffmpeg is based on the original file's
container framerate (which can be different from the average framerate).
This means that an input video with variable framerate with about 30 frames per
second on average, but a maximum of 120 fps will be converted to a constant 120
fps file, which won't be processed by other Mastodon servers.
This commit changes it so that input files with VFR and a maximum framerate
above the framerate threshold are converted to VFR files with the maximum frame
rate enforced.
* Fix structured data parsing from links choking on bad data
- Fix og:url meta tag being prioritized over canonical link tag
- Fix structured data parsing choking on commented-out CDATA declarations
- Fix HTML entities in title, description, provider_name, author_name
- Change structured data parsing to attempt every JSON-LD script tag
* Remove unnecessary slash escapes from CDATA regex pattern
* Fix Sidekiq warnings about JSON serialization
This occurs on every symbol argument we pass, and every symbol key in hashes,
because Sidekiq expects strings instead.
See https://github.com/mperham/sidekiq/pull/5071
We do not need to change how workers parse their arguments because this has
not changed and we were already converting to symbols adequately or using
`with_indifferent_access`.
* Set Sidekiq to raise on unsafe arguments in test mode
In order to more easily catch issues that would produce warnings in production
code.
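Sidekiq exposes this as `strict_args!`; enabling it for the test environment is roughly a one-liner (initializer location assumed):
```ruby
# config/initializers/sidekiq.rb
# Raise on non-JSON-native job arguments (symbols, Time, …) in tests instead
# of only logging a warning, so offending jobs fail loudly in CI.
Sidekiq.strict_args! if Rails.env.test?
```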
* Add support for editing for published statuses
* Fix references to stripped-out code
* Various fixes and improvements
* Further fixes and improvements
* Fix updates being potentially sent to unauthorized recipients
* Various fixes and improvements
* Fix wrong words in test
* Fix notifying accounts that were tagged but were not in the audience
* Fix mistake
* Add trending links
* Add overriding specific links trendability
* Add link type to preview cards and only trend articles
Change trends review notifications from being sent every 5 minutes to being sent every 2 hours
Change threshold from 5 unique accounts to 15 unique accounts
* Fix tests
For some reason, some misconfigured servers return an empty document when
queried over webfinger. Since an empty document does not lead to a parse
error, the failure is not caught properly and triggers uncaught exceptions
later on.
This PR fixes that by immediately erroring out with `Webfinger::Error` on
getting an empty response.
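A sketch of erroring out early on an empty body (class and method names follow the description above; details assumed):
```ruby
def body_to_json(body)
  raise Webfinger::Error, "Empty response for #{@uri}" if body.blank?

  Oj.load(body, mode: :strict)
rescue Oj::ParseError
  raise Webfinger::Error, "Invalid JSON in response for #{@uri}"
end
```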
* Switch from unmaintained paperclip to kt-paperclip
* Drop some compatibility monkey-patches not required by kt-paperclip
* Drop media spoof check monkey-patching
It's broken with kt-paperclip and hopefully it won't be needed anymore
* Fix regression introduced by paperclip 6.1.0
* Do not rely on pathname to call FastImage
* Add test for ogg vorbis file with cover art
* Add audio/vorbis to the accepted content-types
This seems erroneous as this would be the content-type for a vorbis stream
without an ogg container, but that's what the `marcel` gem outputs, so…
* Restore missing for_as_default method
* Refactor Attachmentable concern and delay Paperclip's content-type spoof check
Check for content-type spoofing *after* setting the extension ourselves, this
fixes a regression with kt-paperclip's validations being more strict than
paperclip 6.0.0 and rejecting some Pleroma uploads because of unknown
extensions.
* Please CodeClimate
* Add audio/vorbis to the unreliable set
It doesn't correspond to a file format and thus has no extension associated.
The auto-linking code basically rewrote the whole string, escaping non-ASCII
characters in an inefficient way, and built a full character offset map
between the unescaped and escaped texts before sending the contents to
TwitterText's extractor.
Instead of doing that, this commit changes the TwitterText regexps to include
valid IRI characters in addition to valid URI characters.
* Fix Delete and Create-related locks expiring too fast
Fixes #16238
By default, RedisLock expires after 10 seconds, which may not be enough to
process statuses, especially when those have attached media files.
This commit extends those 10 seconds to 15 minutes, which should be plenty
enough to handle any status, while being short enough to not waste many
sidekiq job retries in the exceedingly rare case in which a sidekiq process
would crash when processing a `Create` or `Delete`.
* Fix other RedisLock autorelease durations
Fixes #15645
- things that only perform a few simple database queries (e.g. finding and
saving a record) have been left unchanged, so they'll still use the default
10s duration
- things that perform significantly more complex database queries have been
changed to a 5 minutes timeout
- things that perform multiple HTTP queries have been changed to a 15 minutes
timeout
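For reference, the lock duration is the `autorelease` option handed to RedisLock; a sketch of the 15-minute variant (key naming and surrounding code assumed):
```ruby
RedisLock.acquire(redis: Redis.current, key: "create:#{object_uri}", autorelease: 15.minutes.to_i) do |lock|
  if lock.acquired?
    process_status! # placeholder for the actual Create handling
  else
    raise Mastodon::RaceConditionError
  end
end
```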
If a status with a hashtag becomes very popular, it stands to
reason that the hashtag should have a chance at trending
Fix no stats being recorded for hashtags that are not allowed
to trend, and stop ignoring bots
Remove references to hashtags in profile directory from the code
and the admin UI
* Fix media processing getting stuck on too much stdin/stderr
See thoughtbot/terrapin#5
* Remove dependency on paperclip-av-transcoder gem
* Remove dependency on streamio-ffmpeg gem
* Disable stdin on ffmpeg process
* Add tests
* Ensure deleted statuses are marked as such
* Save some redis memory by not storing URIs in delete_upon_arrival values
* Avoid possible race condition when processing incoming Deletes
* Avoid potential duplicate Delete forwards
* Lower lock durations to reduce issues in case of hard crash of the Rails process
* Check for `lock.acquired?` and improve comment
* Refactor RedisLock usage in app/lib/activitypub
* Fix using incorrect or non-existent sender for relaying Deletes
* Update devise-two-factor to unreleased fork for Rails 6 support
Update tests to match new `rotp` version.
* Update nsa gem to unreleased fork for Rails 6 support
* Update rails to 6.1.3 and rails-i18n to 6.0
* Update to unreleased fork of pluck_each for Rails 6 support
* Run "rails app:update"
* Add missing ActiveStorage config file
* Use config.ssl_options instead of removed ApplicationController#force_ssl
Disabled force_ssl-related tests as they do not seem to be easily testable
anymore.
* Fix nonce directives by removing Rails 5 specific monkey-patching
* Fix fixture_file_upload deprecation warning
* Fix yield-based test failing with Rails 6
* Use Rails 6's index_with when possible
* Use ActiveRecord::Cache::Store#delete_multi from Rails 6
This will yield better performances when deleting an account
* Disable Rails 6.1's automatic preload link headers
Since Rails 6.1, ActionView adds preload links for JavaScript files
in the Link header by default.
In our case, that will bloat headers too much and potentially cause
issues with reverse proxies. Furthermore, we don't need those links,
as we already output them as HTML link tags.
* Switch to Rails 6.0 default config
* Switch to Rails 6.1 default config
* Do not include autoload paths in the load path
* Prepare Mastodon for zeitwerk autoloader (Rails 6)
Add inflections and rename/move a few classes.
In particular, app/lib/exceptions.rb and app/lib/sanitize_config.rb
were manually loaded while still in autoload paths.
* Add inflection for Url → URL
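Zeitwerk resolves constant names from file names, so the Url → URL case is handled with an acronym inflection; roughly (initializer path assumed):
```ruby
# config/initializers/inflections.rb
ActiveSupport::Inflector.inflections(:en) do |inflect|
  inflect.acronym 'URL' # camelize('url') => 'URL', matching the renamed constant
end
```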
* Fix misuse of foreign_type
* Fix use of removed "add_template_helper"
* Use response.media_type instead of response.content_type in tests
* Fix CSV export controller test on Rails 6
Rails 6 sets a "filename*" field in the Content-Disposition header to
explicitly encode the filename as UTF-8.
This change checks the first part of the Content-Disposition header so
it matches in both Rails 5 and Rails 6.
* Fix emoji formatting with Rails 6
* Make emoji output more idiomatic and robust
* Switch from redis-rails gem to built-in Rails redis cache storage
* Update twitter-text from 1.14 to 3.1.0
* Disable emoji parsing
* Properly depend on twitter-text for url detection
* Fix some URLs being wrongly detected client-side
* Add test for server-side validation of non-autolinkable URLs
* Fix server-side status length counting
* Fix URI of repeat follow requests not being recorded
In case we receive a “repeat” or “duplicate” follow request, we automatically
fast-forward the accept with the latest received Activity `id`, but we don't
record it.
In general, a “repeat” or “duplicate” follow request may happen if for some
reason (e.g. inconsistent handling of Block or Undo Accept activities, an
instance being brought back up from the dead, etc.) the local instance thought
the remote actor were following them while the remote actor thought otherwise.
In those cases, the remote instance does not know about the older Follow
activity `id`, so keeping that record serves no purpose, but knowing the most
recent one is useful if the remote implementation at some point refers to it
by `id` without inlining it.
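A minimal sketch of recording the newest Activity id on a repeat Follow (simplified; not the exact activity handler):
```ruby
follow = Follow.find_by(account: remote_actor, target_account: local_account)
# Keep the most recent Follow activity id so that later references to it by
# `id` (without inlining) can still be resolved.
follow&.update!(uri: activity_json['id'])
```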
* Add tests
Unlike locally-issued blocks, they weren't clearing follow
relationships in both directions, follow requests, or notifications.
Co-authored-by: Claire <claire.github-309c@sitedethib.com>