Interests: programming, video games, anime, music composition

I used to be on kbin as e0qdk@kbin.social before it broke down.

  • 0 Posts
  • 92 Comments
Joined 1 year ago
Cake day: November 27th, 2023

  • e0qdk@reddthat.com to Fediverse@lemmy.world: Karma in lemmy?

    I’m under the impression the reputation points are either the combined number of upvotes or that minus downvotes

    IIRC from kbin – and assuming mbin didn’t change things – boosts counted for two points while upvotes (favorites) are one point and downvotes (reduces) are one point. Boosts are basically retweets, IIRC, and wouldn’t be coming from lemmy users – just from Mastodon, mbin, and other tools that support it.

    Edit: To clarify, I mean downvotes reduce by one point.
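
    As a rough sketch, the kbin-era scoring I'm describing (boosts +2, favorites +1, reduces -1 – my recollection, not a documented formula) would work out like this:

```python
# Sketch of the kbin-era reputation scoring described above (from memory,
# not a documented formula): boosts count +2, favorites (upvotes) +1,
# and reduces (downvotes) -1.
def reputation(boosts: int, favourites: int, reduces: int) -> int:
    return 2 * boosts + favourites - reduces

# e.g. 3 boosts, 10 favorites, 2 reduces -> 2*3 + 10 - 2 = 14
print(reputation(3, 10, 2))
```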

  • As someone who watches gaming footage on PeerTube, I’ve mostly interacted with single creator instances – i.e. either the creator themselves is self-hosting it or it’s run by a fan as a non-YT backup of their Twitch/Owncast/whatever VODs. Those instances generally do not allow anyone else to upload.

    Discoverability sucks, but the way I’ve found them is by using Sepia Search and looking for specific words from game titles. I imagine the way most other people find them is that they already know the content creator from Twitch and want to find an old VOD that isn’t archived on YT (e.g. because of YT’s bullshit copyright system) – but that’s just a guess.



  • ABOUT THIS REPORT

    This Report by the U.S. Copyright Office addresses the legal and policy issues related to artificial intelligence (“AI”) and copyright, as outlined in the Office’s August 2023 Notice of Inquiry (“NOI”).

    The Report will be published in several Parts, each one addressing a different topic. This Part addresses the copyrightability of works created using generative AI. The first Part, published in 2024, addresses the topic of digital replicas—the use of digital technology to realistically replicate an individual’s voice or appearance. A subsequent part will turn to the training of AI models on copyrighted works, licensing considerations, and allocation of any liability. To learn more, visit www.copyright.gov/ai.

    Emphasis mine. So, probably have to wait for Part 3 or 4 or whatever.


  • Here’s the bullet point summary of findings from page iii for anyone who doesn’t want to go digging through the PDF:

    Based on an analysis of copyright law and policy, informed by the many thoughtful comments in response to our NOI, the Office makes the following conclusions and recommendations:

    • Questions of copyrightability and AI can be resolved pursuant to existing law, without the need for legislative change.
    • The use of AI tools to assist rather than stand in for human creativity does not affect the availability of copyright protection for the output.
    • Copyright protects the original expression in a work created by a human author, even if the work also includes AI-generated material.
    • Copyright does not extend to purely AI-generated material, or material where there is insufficient human control over the expressive elements.
    • Whether human contributions to AI-generated outputs are sufficient to constitute authorship must be analyzed on a case-by-case basis.
    • Based on the functioning of current generally available technology, prompts do not alone provide sufficient control.
    • Human authors are entitled to copyright in their works of authorship that are perceptible in AI-generated outputs, as well as the creative selection, coordination, or arrangement of material in the outputs, or creative modifications of the outputs.
    • The case has not been made for additional copyright or sui generis protection for AI-generated content.

    The Office will continue to monitor technological and legal developments to determine whether any of these conclusions should be revisited. It will also provide ongoing assistance to the public, including through additional registration guidance and an update to the Compendium of U.S. Copyright Office Practices.




  • It’s surprising that there doesn’t seem to be an obvious way in the UI to just see a list of creators/channels on a local instance. So, that’s the first thing I’d change to improve discoverability.

    The way I currently find relevant content is by going to Sepia Search, putting in exact words that I think are likely to appear in the title of at least one video on a channel that would likely also have a lot of other relevant content, and then going through that channel’s playlists. Those searches often lead me to single-user instances with only one or two channels (e.g. a channel with a backup of that user’s YouTube content and a channel with a backup of their Twitch or Owncast or whatever streams).

    When a search leads me to a generalist instance, or one with a relevant subject/theme, I’ve had little luck finding content from anyone else unless they’ve posted recently (compared to other users). Often the content that is most relevant to me is not what is newest but the archives from years ago. (New content does become relevant once I want to follow someone in particular, but it’s not what I want to see first.)

    Another issue I’ve encountered is with the behavior of downloaded videos. I greatly appreciate that PeerTube provides a URL for direct download, and I prefer to download videos in advance and watch them in my own player (so I can watch offline, pause and resume trivially after putting my computer to sleep, etc.). H.264 MP4 works fine for this, but the download seems to be some sort of chunked variant of it (for HLS?) which requires the player to read in the entire file to figure out the length or seek accurately. Having to wait a minute or two before I can seek, every time I open a large video file off my HDD, is an irritating papercut.

    I suspected there was a way to fix it by including an index in the file (or in a sidecar file), but I didn’t know how to do it short of re-encoding the entire video – which I’d rather not do, since that both takes a long time and can result in quality loss. (EDIT: ffmpeg -i input.mp4 -vcodec copy -acodec copy -movflags faststart output.mp4 repacks the video quickly, without re-encoding.)

    This usually doesn’t affect newly added videos (where the download link includes the pattern /download/web-videos and a warning is shown that the video is still being transcoded), but it does once transcoding is done (the URL includes /download/streaming-playlists/hls/videos instead) – so this is something that happens as a result of PeerTube’s reprocessing.

    Downloads from the instances I’ve found most relevant to me are also pretty unreliable (the connection is slow and drops a lot), so I use wget with automatic retries (and it sometimes still needs manual retries…) rather than downloading through my browser, which tends to fail and then, annoyingly, often starts over completely when I retry… It would be really nice if I could verify that I’ve downloaded the file correctly and completely with a sha256 hash or something.
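
    For reference, my download-and-verify routine looks roughly like this (the URL and filename are placeholders, and the checksum step only helps if the instance publishes one somewhere):

```shell
# Resume partial downloads (-c) and retry automatically instead of
# restarting from scratch; the URL below is a placeholder, not a real instance.
wget -c --tries=20 --waitretry=5 --retry-connrefused \
    "https://peertube.example/download/streaming-playlists/hls/videos/VIDEO.mp4"

# If a checksum is published for the file, this is how you'd check it:
sha256sum VIDEO.mp4
```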




  • what is the legitimate use case?

    You do a whole bunch of research on a subject – hours, days, weeks, months, years maybe – and then find something that sparks a connection with something else that you half remember. Where was that thing in the 1000s of pages you read? That’s the problem (or at least one of the problems) it’s supposed to solve.

    I’ve considered writing similar research tools for myself over the years (e.g. save a copy of the HTML and a screenshot of every webpage I visit automatically marked with a timestamp for future reference), but decided the storage cost and risk of accidentally embarrassing/compromising myself by recording something sensitive was too high compared to just taking notes in more traditional ways and saving things manually.
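
    For what it’s worth, the core of the tool I had in mind is tiny – something like this sketch (the names are mine, not from any real project):

```python
# Minimal sketch of the "save every page with a timestamp" idea described
# above: write each captured HTML document under a UTC-timestamped filename
# so snapshots sort chronologically on disk.
from datetime import datetime, timezone
from pathlib import Path

def save_snapshot(html: str, archive_dir: str = "archive") -> Path:
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    path = Path(archive_dir) / f"{stamp}.html"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(html, encoding="utf-8")
    return path
```

    The storage-cost and "accidentally archiving something sensitive" problems are exactly why I never deployed anything like it.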


  • It’s an absolute long-shot, but are there any careers that feel like the research part of grad school, but without the stuff that’s miserable about it (the coursework and bureaucracy)?

    There’s no getting away from the bureaucracy, but it is possible to get career positions in academia – and I don’t mean as a professor, either. Check your university’s job site; if the university is big, it almost certainly has one. Get to know your professors too, and make sure they’re aware of the things you’re good at (even beyond your immediate subject area, if you have additional hobbies/interests/skills) so they can help you find a landing place if things don’t work out where you are. If you’re willing to do programming – even if you don’t like it – there is a hell of a lot of work that needs doing in academia, and some of it pays enough to live on. If there’s some overlap, it’s possible to carve out a niche and evolve a role into a mix of stuff you’re good (enough) at but dislike and stuff you like but which doesn’t always have funding…