{
"version": "https://jsonfeed.org/version/1",
"title": "Toto do stuff",
"description": "",
"home_page_url": "https://totetmatt.github.io",
"feed_url": "https://totetmatt.github.io/feed.json",
"user_comment": "",
"author": {
"name": "Totetmatt"
},
"items": [
{
"id": "https://totetmatt.github.io/twitter-streaming-importer-and-the-new-twitter.html",
"url": "https://totetmatt.github.io/twitter-streaming-importer-and-the-new-twitter.html",
"title": "Twitter Streaming importer and the New Twitter",
"summary": "Due to the current situation on Twitter API, a FLOSS Developer strike will be observed for the Gephi’s Twitter Streaming Importer plugin. This will stay…",
"content_html": "<p>Due to the current situation on Twitter API, a FLOSS Developer strike will be observed for the Gephi’s Twitter Streaming Importer plugin. This will stay like that until there is a positive change on the API and the platform in general.</p>\n<ul>\n<li>No more development, update, maintenance, support.</li>\n<li>Code itself to transform tweet to graph is not restricted. The API access is fully on the responsability of the Twitter company, and I can’t do things about it. Check with them.</li>\n<li>Plugin still available and should works if you have a working API Access.</li>\n<li>Code is open source and you are free to adapt it on new acquisition method.</li>\n</ul>\n<p>[Update from 08-07-2023]</p>\n<p>It’s getting worst so the strike still on. Few comments :</p>\n<ul>\n<li>Looks like the Access Policy has changed a lot. No guarantee that it still works on the plugin. Gephi and the Twitter Streaming Importer are <strong>completely independant and unrelated</strong> to Twitter company. Which mean even if you payed the access to the API, there is no guarantee from the Gephi team and the Twitter Streaming Importer team at all that the Plugin will work.</li>\n<li>Code still open source, you are <em>libre</em> to read it and adapt it as long as it respect the open source licence of the Twitter Streaming Importer.</li>\n</ul>\n",
"author": {
"name": "Totetmatt"
},
"tags": [
],
"date_published": "2023-04-27T19:59:36+02:00",
"date_modified": "2023-07-08T13:39:45+02:00"
},
{
"id": "https://totetmatt.github.io/network-graph-rendering-isopleths-with-gmic.html",
"url": "https://totetmatt.github.io/network-graph-rendering-isopleths-with-gmic.html",
"title": "Network Graph rendering : Isopleths with Gmic",
"summary": "Mathieu Jacomy is currently experimenting a type of graph rendering using a technic called “Hillshading”, a demo is accessible here https://observablehq.com/d/7d19c2d05caf9fb2 . The idea of…",
"content_html": "<p>Mathieu Jacomy is currently experimenting a type of graph rendering using a technic called “Hillshading”, a demo is accessible here <a href=\"https://observablehq.com/d/7d19c2d05caf9fb2\">https://observablehq.com/d/7d19c2d05caf9fb2</a> .\nThe idea of this concept is to add information to enhance the readability of the graph, especially when there is condensed clusters of nodes.</p>\n<p>The current script is working mostly in javascript with D3.js. I wanted to find a way to do it without any JS, locally in my computer. Then I remember a wonderful tool called <a href=\"https://gmic.eu/\">GMIC</a> that can do a lot of advanced image processing.</p>\n<p>To simplify the things, we are going to remove the shading part of the hillshading, which on my opinion isn’t the critical part of the entire process described by Mathieu Jacomy.\nAt the end, hillshading are just shaded <a href=\"https://en.wikipedia.org/wiki/Contour_line#Isopleths\">Isopleths</a> , the line you see in a map that indicate the altitude. I should be enough for the scope of the script we want to acheive.</p>\n<p>To use the script we need to have 2 applications :</p>\n<ul>\n<li><a href=\"https://gmic.eu/\">GMIC</a>, we need the CLI version of the tool. It’s available for Windows and Linux.</li>\n<li><a href=\"https://gephi.org/\">Gephi</a> , for generating graph.</li>\n</ul>\n<h2 id=\"export-network-graph\">Export Network Graph</h2>\n<p>Take any network graph you have, the final effect will wokr better on graph that has a lot of clusters.</p>\n<p>When you’re happy with your spatialisation in preview, we need to export in PNG 2 files with the following configuration:</p>\n<ul>\n<li><code>background.png</code> where only the Nodes are rendered, use the Preview Settings to remove the node labels,the edges and the edge labels. Export this file with a certains option : 4096x4096, <strong>no</strong> transparant background, 0% margin</li>\n<li><code>foreground.png</code> where you can render the node and the edges, (label might be rendered, but might occur small issue later, I will come back). 
Export this file with these options: 4096x4096, <strong>transparent background</strong>, 0% margin.</li>\n</ul>\n<p><figure class=\"post__image\"><img loading=\"lazy\" src=\"https://totetmatt.github.io/media/posts/51/background.png\" sizes=\"(max-width: 48em) 100vw, 768px\" srcset=\"https://totetmatt.github.io/media/posts/51/responsive/background-xs.png 300w ,https://totetmatt.github.io/media/posts/51/responsive/background-sm.png 480w ,https://totetmatt.github.io/media/posts/51/responsive/background-md.png 768w ,https://totetmatt.github.io/media/posts/51/responsive/background-lg.png 1024w ,https://totetmatt.github.io/media/posts/51/responsive/background-xl.png 1360w ,https://totetmatt.github.io/media/posts/51/responsive/background-2xl.png 1600w\" alt=\"Background\" width=\"4096\" height=\"4096\" /></figure> Background\n<figure class=\"post__image\"><img loading=\"lazy\" src=\"https://totetmatt.github.io/media/posts/51/foreground.png\" sizes=\"(max-width: 48em) 100vw, 768px\" srcset=\"https://totetmatt.github.io/media/posts/51/responsive/foreground-xs.png 300w ,https://totetmatt.github.io/media/posts/51/responsive/foreground-sm.png 480w ,https://totetmatt.github.io/media/posts/51/responsive/foreground-md.png 768w ,https://totetmatt.github.io/media/posts/51/responsive/foreground-lg.png 1024w ,https://totetmatt.github.io/media/posts/51/responsive/foreground-xl.png 1360w ,https://totetmatt.github.io/media/posts/51/responsive/foreground-2xl.png 1600w\" alt=\"Foreground\" width=\"4096\" height=\"4096\" /></figure> Foreground</p>\n<h2 id=\"processing-with-gmic\">Processing with Gmic</h2>\n<p>Let’s say you exported the PNGs in the same directory and <code>gmic</code> is accessible from your terminal; run this command: </p>\n<pre><code>gmic.exe background.png fx_stamp[-1] 1,100,0,30,0,1,1,0,50,50 fx_channel_processing[-1] 0,0,0,1.22,2,0,100,256,0,0,0,2,0,0,50,50 samj_Colored_Outlines[-1] 0,0,16,0,2,0,0,0,255 fx_channel_processing[-1] 0,0,100,0,0,0,100,256,0,1,0,2,0,0,50,50 output[-1] intermediate.png\n</code></pre>\n<p>Quick explanation: </p>\n<ul>\n<li><strong>fx_stamp</strong>: Converts the image to black and white and inverts it.</li>\n<li><strong>fx_channel_processing</strong>: Applies a blur; this somehow simulates a Kernel Density Estimation. To simplify, the blur processing tries to generate a density approximation at every point of the map.</li>\n<li><strong>samj_Colored_Outlines</strong>: Creates the isopleths. 
We could vulgarise this by saying it’s a quantization of the discrete density approximation computed in the previous step.</li>\n<li><strong>fx_channel_processing</strong>: Converts to a black and white image.</li>\n<li><strong>output</strong>: Saves the image to <code>intermediate.png</code>.</li>\n</ul>\n<p><figure class=\"post__image\"><img loading=\"lazy\" src=\"https://totetmatt.github.io/media/posts/51/intermediate-3.png\" sizes=\"(max-width: 48em) 100vw, 768px\" srcset=\"https://totetmatt.github.io/media/posts/51/responsive/intermediate-3-xs.png 300w ,https://totetmatt.github.io/media/posts/51/responsive/intermediate-3-sm.png 480w ,https://totetmatt.github.io/media/posts/51/responsive/intermediate-3-md.png 768w ,https://totetmatt.github.io/media/posts/51/responsive/intermediate-3-lg.png 1024w ,https://totetmatt.github.io/media/posts/51/responsive/intermediate-3-xl.png 1360w ,https://totetmatt.github.io/media/posts/51/responsive/intermediate-3-2xl.png 1600w\" alt=\"Intermediate\" width=\"4096\" height=\"4096\" /></figure> Intermediate</p>\n<p>Then run this script:</p>\n<pre><code>gmic.exe intermediate.png foreground.png +channels[-1] 100% +image[0] [1],0%,0%,0,0,1,[2],255 output[-1] output.png\n</code></pre>\n<p>Here the script simply composites <code>intermediate.png</code> and <code>foreground.png</code> into one final image.</p>\n<p><figure class=\"post__image\"><img loading=\"lazy\" src=\"https://totetmatt.github.io/media/posts/51/output.png\" sizes=\"(max-width: 48em) 100vw, 768px\" srcset=\"https://totetmatt.github.io/media/posts/51/responsive/output-xs.png 300w ,https://totetmatt.github.io/media/posts/51/responsive/output-sm.png 480w ,https://totetmatt.github.io/media/posts/51/responsive/output-md.png 768w ,https://totetmatt.github.io/media/posts/51/responsive/output-lg.png 1024w ,https://totetmatt.github.io/media/posts/51/responsive/output-xl.png 1360w ,https://totetmatt.github.io/media/posts/51/responsive/output-2xl.png 1600w\" alt=\"Output\" width=\"4096\" height=\"4096\" /></figure></p>\n<h2 id=\"comments\">Comments</h2>\n<p>The process is still experimental; a lot of things may vary, like the export size, which requires reparametrizing the script.\nThere is also some limitation due to the current behaviour of Gephi. 
If the foreground is exported with node labels, it might generate an image not aligned with the background, which breaks the global effect at the end.</p>\n<p>Other than that, the effect works well with networks that have a certain critical mass of node density.\nHaving the edges in the foreground hides the iso lines a little bit.</p>\n<h2 id=\"some-other-experiments\">Some other experiments</h2>\n<p><figure class=\"post__image\"><img loading=\"lazy\" src=\"https://totetmatt.github.io/media/posts/51/world_border_final.png\" sizes=\"(max-width: 48em) 100vw, 768px\" srcset=\"https://totetmatt.github.io/media/posts/51/responsive/world_border_final-xs.png 300w ,https://totetmatt.github.io/media/posts/51/responsive/world_border_final-sm.png 480w ,https://totetmatt.github.io/media/posts/51/responsive/world_border_final-md.png 768w ,https://totetmatt.github.io/media/posts/51/responsive/world_border_final-lg.png 1024w ,https://totetmatt.github.io/media/posts/51/responsive/world_border_final-xl.png 1360w ,https://totetmatt.github.io/media/posts/51/responsive/world_border_final-2xl.png 1600w\" alt=\"Image description\" width=\"4096\" height=\"4096\" /></figure></p>\n<p><figure class=\"post__image\"><img loading=\"lazy\" src=\"https://totetmatt.github.io/media/posts/51/miserable_final.png\" sizes=\"(max-width: 48em) 100vw, 768px\" srcset=\"https://totetmatt.github.io/media/posts/51/responsive/miserable_final-xs.png 300w ,https://totetmatt.github.io/media/posts/51/responsive/miserable_final-sm.png 480w ,https://totetmatt.github.io/media/posts/51/responsive/miserable_final-md.png 768w ,https://totetmatt.github.io/media/posts/51/responsive/miserable_final-lg.png 1024w ,https://totetmatt.github.io/media/posts/51/responsive/miserable_final-xl.png 1360w ,https://totetmatt.github.io/media/posts/51/responsive/miserable_final-2xl.png 1600w\" alt=\"Image description\" width=\"4096\" height=\"4096\" /></figure></p>\n<p><figure class=\"post__image\"><img loading=\"lazy\" src=\"https://totetmatt.github.io/media/posts/51/rfc_final-2.png\" sizes=\"(max-width: 48em) 100vw, 768px\" srcset=\"https://totetmatt.github.io/media/posts/51/responsive/rfc_final-2-xs.png 300w ,https://totetmatt.github.io/media/posts/51/responsive/rfc_final-2-sm.png 480w ,https://totetmatt.github.io/media/posts/51/responsive/rfc_final-2-md.png 768w ,https://totetmatt.github.io/media/posts/51/responsive/rfc_final-2-lg.png 1024w ,https://totetmatt.github.io/media/posts/51/responsive/rfc_final-2-xl.png 1360w ,https://totetmatt.github.io/media/posts/51/responsive/rfc_final-2-2xl.png 1600w\" alt=\"Image description\" width=\"4096\" height=\"4096\" /></figure></p>\n
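<p>To make the pipeline easier to repeat, the two GMIC calls can be chained in a small shell script. This is only a sketch under the assumptions of this post: <code>gmic</code> is on the PATH (drop the <code>.exe</code> suffix on Linux), the community filters are installed (<code>gmic update</code>), and the two PNGs sit in the current directory.</p>\n<pre><code class=\"language-bash\">#!/bin/sh\n# isopleths.sh -- sketch: background.png + foreground.png -> output.png\nset -e\n# Step 1: invert, blur (density approximation) and quantize outlines -> intermediate.png\ngmic background.png fx_stamp[-1] 1,100,0,30,0,1,1,0,50,50 fx_channel_processing[-1] 0,0,0,1.22,2,0,100,256,0,0,0,2,0,0,50,50 samj_Colored_Outlines[-1] 0,0,16,0,2,0,0,0,255 fx_channel_processing[-1] 0,0,100,0,0,0,100,256,0,1,0,2,0,0,50,50 output[-1] intermediate.png\n# Step 2: composite the foreground over the isopleths -> output.png\ngmic intermediate.png foreground.png +channels[-1] 100% +image[0] [1],0%,0%,0,0,1,[2],255 output[-1] output.png\n</code></pre>\n",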
"image": "https://totetmatt.github.io/media/posts/51/output-2.png",
"author": {
"name": "Totetmatt"
},
"tags": [
"rendering",
"map",
"graph",
"gmic",
"Gephi"
],
"date_published": "2023-02-26T13:47:52+01:00",
"date_modified": "2023-02-26T14:36:52+01:00"
},
{
"id": "https://totetmatt.github.io/gephis-twitter-streaming-importer-v2-is-out.html",
"url": "https://totetmatt.github.io/gephis-twitter-streaming-importer-v2-is-out.html",
"title": "Gephi's Twitter Streaming Importer V2 is Out !",
"summary": "Important notes : The old version of the plugin is deprecated, the latest version will be the 1.4.4 and then won’t be updated anymore. The…",
"content_html": "<p><strong>Important notes</strong> : <em>The old version of the plugin is deprecated, the latest version will be the 1.4.4 and then won’t be updated anymore.</em></p>\n<h1 id=\"why-a-v2-\">Why a V2 ?</h1>\n<p>The old version of the plugin is using the Twitter Streaming API v1 which is currently getting deprecated by twitter, with the consequence that most of the new users of the plugin are getting the famous “HTTP 403” error and can’t get the plugin working.</p>\n<p>The Twitter Streaming Importer V2 is now using the new Twitter API v2. You still need to have a developer account and an application that can use the V2 version of the API (that should be the nominal case now). </p>\n<h1 id=\"what-changes-\">What changes ?</h1>\n<h2 id=\"bearer-token\">Bearer Token</h2>\n<p>The old V1 and the new V2 API are slightly different, so you will need to reconfigure the credentials configuration inside the plugin. Instead of the Access API / Token set of credentials, you now only need the Bearer Token that you can generate and get on your Twitter application account.</p>\n<h2 id=\"query-rules\">Query Rules</h2>\n<p>The “query” is now fully handled by Twitter. The .json file to save your query won’t be backported as there is a fundamentally different approach now to querying the stream on twitter api. Please read how to create rules on the official twitter documentation about filtered streams <a href=\"https://developer.twitter.com/en/docs/twitter-api/tweets/filtered-stream/integrate/build-a-rule\">https://developer.twitter.com/en/docs/twitter-api/tweets/filtered-stream/integrate/build-a-rule</a> .</p>\n<p>This new way of building rules has multiple advantages,\nThe rules are saved and bound to your application / Bearer Token, which means it will stay if you close Gephi and re-open it.</p>\n<p>You can add and remove rules without restarting the running stream.\nYou can have multiple rules. These rules can be flagged with a ‘tag’.\nThe plugin will use these rules tags to create new columns on the nodes so you can check what rules the entity is matching.</p>\n<p>Again, as this mechanism is purely controlled by twitter api, please read the official doc for more detailed information.</p>\n<h2 id=\"other-details\">Other details</h2>\n<p>The api v2 has also changed how twitter retrieves the information. Moreover the plugin has to migrate from using Twitter4J library to official twitter-api-java-sdk (<a href=\"https://github.com/twitterdev/twitter-api-java-sdk\">https://github.com/twitterdev/twitter-api-java-sdk</a> ) .</p>\n<p>These changes implied rewriting the networklogic to support the new way data is getting gathered. Fortunately it was not that hard to do the rewriting and was also a chance to review some of the logic to fix some bugs. The networklogic should react mostly the same way as on the old version.</p>\n<p>During the rewriting, some minor optimisation has been done, notably the issue that the creation of the entities shouldn’t be now too much behind when using Force Atlas.</p>\n",
"image": "https://totetmatt.github.io/media/posts/50/KodeLife-2022-05-29-at-20.57.32-0000_2.png",
"author": {
"name": "Totetmatt"
},
"tags": [
"twitter",
"graph",
"Real-time"
],
"date_published": "2022-06-24T17:36:09+02:00",
"date_modified": "2022-06-24T19:47:05+02:00"
},
{
"id": "https://totetmatt.github.io/how-to-capture-your-bonzomatic-with-ffmpeg.html",
"url": "https://totetmatt.github.io/how-to-capture-your-bonzomatic-with-ffmpeg.html",
"title": "How to Capture your Bonzomatic with FFmpeg",
"summary": "Got to work on this website https://psenough.github.io/shader_summary/ that try to gather all graphical live coding events performed in the past. Basically, for people that don’t…",
"content_html": "<p>Got to work on this website <a href=\"https://psenough.github.io/shader_summary/\">https://psenough.github.io/shader_summary/</a> that try to gather all graphical live coding events performed in the past.</p>\n<p>Basically, for people that don’t know, it’s live coding performance done, sometime as a competition, sometime as a jam, where folks create real time graphics stuff.</p>\n<p>One of the common tool used is <a href=\"https://github.com/TheNuSan/Bonzomatic/releases/tag/v11\">Bonzomatic</a> , it’s a simple application that use OpenGL to render a rectangle that fit to the application screen and then you can live edit the Framgment Shader that determine what should be the color of the pixel.</p>\n<p>Problem was we got a lot of entries but no preview images. Which is quite sad for a graphics discipline.</p>\n<p>After spending an afternoon coding into bonzomatic to find a way to export the bufferframe to an image (was almost here, I think I was missing some color format alignment) I thought about maybe a simpler solution using the best tool ever : FFmpeg.</p>\n<p>If we look at the website, there is a way (in Windows at least) to capture an application windows (<a href=\"https://trac.ffmpeg.org/wiki/Capture/Desktop\">https://trac.ffmpeg.org/wiki/Capture/Desktop</a>) .</p>\n<p>So using this <code>gdigrab</code> format and using the window’s name, you can capture the input like this :</p>\n<pre><code class=\"language-bash\">ffmpeg -f gdigrab -i 'title=BONZOMATIC - GLFW' -vframes 1 -q:v 2 -y snapshot.jpg\n</code></pre>\n<p>Some notes :</p>\n<ul>\n<li>It will also capture the mouse if it’s inside, so be careful (maybe an option)</li>\n<li>If you don’t use fullscreen, it will capture only the “content” of the window, not the menu bar. Which mean that if you maximise, the output resolution will be the screen resolution minus the menu bar + other window frame.</li>\n<li>You might want to add a <code>-s 1</code> before the input to let the application start and / or let ffmpeg get warm before starting a record</li>\n</ul>\n<p>Of course now you can also export as video.\nHere is an example of a <code>ffmpeg</code> command that render 10 seconds to mp4 :</p>\n<pre><code class=\"language-bash\">ffmpeg -ss 1 -t 10 -y -framerate 60 -f gdigrab -vsync 0 -hwaccel cuda -hwaccel_output_format cuda -i 'title=BONZOMATIC - GLFW' -c:a copy -c:v h264_nvenc -tune hq -b:v 20M -bufsize 20M -maxrate 50M -qmin 0 -g 250 -bf 3 -b_ref_mode middle -temporal-aq 1 -rc-lookahead 20 -i_qfactor 0.75 -b_qfactor 1.1 out.mp4\n\n# (I copy pasted some config + blind tweak. Can't really explain the options but was happy with result)\n</code></pre>\n<p>I’m using <code>nvidia_env</code> as it’s much faster for encoding and avoid issues I got with the normal libx264. </p>\n<p>Need to check more in detail for Sound capture at the same time and see if there is different input format to play with.</p>\n",
"image": "https://totetmatt.github.io/media/posts/47/NlXGDH.jpg",
"author": {
"name": "Totetmatt"
},
"tags": [
"glsl",
"ffmpeg",
"bonzomatic"
],
"date_published": "2021-06-05T09:52:56+02:00",
"date_modified": "2021-06-05T10:18:00+02:00"
},
{
"id": "https://totetmatt.github.io/twitch-and-ffmpeg-with-some-youtube-dl-help-fetch-from-live-stream-to-local-file.html",
"url": "https://totetmatt.github.io/twitch-and-ffmpeg-with-some-youtube-dl-help-fetch-from-live-stream-to-local-file.html",
"title": "Twitch and FFmpeg and Youtube-dl: Fetch from live stream to local file",
"summary": "(Using Windows PowerShell, adapt for UNIX bash shouldn’t be a big issue) So something nice with youtube-dl is like you can ask not to download…",
"content_html": "<p><em>(Using Windows PowerShell, adapt for UNIX bash shouldn’t be a big issue)</em> </p>\n<h1 id=\"record-a-live-stream\">Record a live stream</h1>\n<p>So something nice with youtube-dl is like you can ask not to download the media but to fetch for the media link :</p>\n<pre><code>>> youtube-dl -g http://something\nyoutube-dl -g https://www.youtube.com/watch?v=RJt01u4yrLQ\nhttps://r2---sn-h0jeened.googlevideo.com/videoplayback?expire=1[...]\n</code></pre>\n<p>If you use that for a twitch channel that is streaming live, it returns you the HLS stream.</p>\n<pre><code>>> youtube-dl -g https://www.twitch.tv/farore_de_firone\nhttps://video-weaver.ber01.hls.ttvnw.net/v1/playlist/CpkEQusnrcdffNI3[..]MA2c4.m3u8\n</code></pre>\n<p>It by default binds to the best quality video, but you can check and select all format available using <code>-F</code></p>\n<pre><code>>> youtube-dl -F https://www.twitch.tv/farore_de_firone\n[twitch:stream] farore_de_firone: Downloading stream GraphQL\n[twitch:stream] farore_de_firone: Downloading access token JSON\n[twitch:stream] 39653517372: Downloading m3u8 information\n[info] Available formats for 39653517372:\nformat code extension resolution note\naudio_only mp4 audio only 2k , mp4a.40.2\n160p mp4 284x160 230k , avc1.4D401F, 30.0fps, mp4a.40.2\n360p mp4 640x360 630k , avc1.4D401F, 30.0fps, mp4a.40.2\n480p mp4 852x480 1262k , avc1.4D401F, 30.0fps, mp4a.40.2\n720p60 mp4 1280x720 3257k , avc1.4D401F, 60.0fps, mp4a.40.2\n1080p60__source_ mp4 1920x1080 6713k , avc1.64002A, 60.0fps, mp4a.40.2 (best)\n</code></pre>\n<p><em>You could even only have the audio-stream</em></p>\n<p>And to select it </p>\n<pre><code>>> youtube-dl -f 160p -g https://www.twitch.tv/farore_de_firone\n</code></pre>\n<p>So with this link, you can use ffmpeg to record localy the stream to your computer (and have your own reaplay / VOD without the “disagreement” of Twitch VOD :) )</p>\n<pre><code>>> ffmpeg -i "$(youtube-dl -f 720p60 -g https://www.twitch.tv/farore_de_firone)" -c copy stream.20201012.mp4\n</code></pre>\n<p>And here it’s quite simple “dump” of the running script. Nothing prevent you to add some filters, reencoding that adapt to your needs.</p>\n<h1 id=\"mixing-multiple-stream\">Mixing multiple stream</h1>\n<p>Let’s have some fun, there is some streamers that plays together on the same game. Usually, you can watch their POV at the sametime with the <strong>Twitch Squad</strong> mechanism or <strong>Multitwitch</strong> application. But would it be possible to record a file in such way ? 
</p>\n<p>Actually yes: ffmpeg can take multiple video inputs and combine them on the fly via a filter complex to render all the videos in the same stream.</p>\n<p>There is a nice topic on Stack Overflow that explains how to simply stack multiple videos: <a href=\"https://stackoverflow.com/questions/11552565/vertically-or-horizontally-stack-mosaic-several-videos-using-ffmpeg\">https://stackoverflow.com/questions/11552565/vertically-or-horizontally-stack-mosaic-several-videos-using-ffmpeg</a></p>\n<p>Example merging 2 videos:</p>\n<pre><code>>> ffmpeg -i "$(youtube-dl -g https://www.twitch.tv/antoinedaniellive)" \\\n -i "$(youtube-dl -g https://www.twitch.tv/soon)" \\\n -filter_complex vstack=inputs=2 \\\n -map 0:a \\\n output.mp4\n</code></pre>\n<p>Example merging 4 videos: </p>\n<pre><code>>> ffmpeg -i "$(youtube-dl -f 160p -g https://www.twitch.tv/antoinedaniellive)" \\\n -i "$(youtube-dl -f 160p -g https://www.twitch.tv/soon)" \\\n -i "$(youtube-dl -f 160p -g https://www.twitch.tv/angledroit )" \\\n -i "$(youtube-dl -f 160p -g https://www.twitch.tv/etoiles)" \\\n -filter_complex "[0:v][1:v][2:v][3:v]xstack=inputs=4:layout=0_0|w0_0|0_h0|w0_h0[v]" -map "[v]" \\\n -map 0:a \\\n -y output.mp4\n</code></pre>\n<p><em>Note</em>: it’s better to stack videos vertically. Horizontal stacking does work, but then some services (like Twitter) won’t accept the video because the image ratio will be too extreme.</p>\n<p>The <code>-map 0:a </code> here is necessary to select which audio you want to keep.</p>\n<p>The mkv format also allows recording multiple video streams within one file, which you can then select from afterwards: </p>\n<pre><code>>> ffmpeg -i "$(youtube-dl -g https://www.twitch.tv/alphacast)" \\ \n -i "$(youtube-dl -g https://www.twitch.tv/colas_bim)" \\\n -i "$(youtube-dl -g https://www.twitch.tv/eventisfr)" \\\n -i "$(youtube-dl -g https://www.twitch.tv/fusiow)" \\\n -map 0:1 -map 1:1 -map 2:1 -map 3:1 -map 0:0 \\ \n -c copy \\\n out.mkv\n</code></pre>\n
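<p>To pull a single POV back out of that mkv later, here is a hedged sketch using ffmpeg’s stream selection (assumption: the video stream indices follow the mapping order used above, so <code>0:v:2</code> is the third POV):</p>\n<pre><code>>> ffmpeg -i out.mkv -map 0:v:2 -map 0:a -c copy third_pov.mp4\n</code></pre>\n",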
"author": {
"name": "Totetmatt"
},
"tags": [
],
"date_published": "2020-10-13T20:04:57+02:00",
"date_modified": "2020-10-13T20:04:57+02:00"
},
{
"id": "https://totetmatt.github.io/extract-chapters-youtube-media.html",
"url": "https://totetmatt.github.io/extract-chapters-youtube-media.html",
"title": "Extract chapters from Youtube Media",
"summary": "Youtube recently got this “chapter” concept where it fragment a long video with chapters. I think this data might be parsed from the description of…",
"content_html": "<p>Youtube recently got this “chapter” concept where it fragment a long video with chapters. I think this data might be parsed from the description of the video done, as they already parse any timestamp available for a while now.</p>\n<p>Thanks to youtube-dl, we can download thena video and the metadata which now contains this chapter data. </p>\n<pre><code class=\"language-bash\">$ youtube-dl --write-info-json -x --audio-format mp3 https://www.youtube.com/watch?v=HZTStHzWRxM\n[youtube] HZTStHzWRxM: Downloading webpage\n[info] Writing video description metadata as JSON to: The New Youtube Chapter Timestamp Feature-HZTStHzWRxM.info.json\n[download] Destination: The New Youtube Chapter Timestamp Feature-HZTStHzWRxM.webm\n[download] 100% of 3.22MiB in 00:00\n[ffmpeg] Destination: The New Youtube Chapter Timestamp Feature-HZTStHzWRxM.mp3\nDeleting original file The New Youtube Chapter Timestamp Feature-HZTStHzWRxM.webm (pass -k to keep)\n</code></pre>\n<p>We will use <a href=\"https://www.youtube.com/watch?v=HZTStHzWRxM\">https://www.youtube.com/watch?v=HZTStHzWRxM</a> as example.</p>\n<p>The command above will download the video file, transcode it to mp3 and also download the metadata in a json format. We have now 2 files :</p>\n<ul>\n<li><code>The New Youtube Chapter Timestamp Feature-HZTStHzWRxM.info.json</code> that contains data </li>\n<li><code>The New Youtube Chapter Timestamp Feature-HZTStHzWRxM.mp3</code> that is the media</li>\n</ul>\n<p><code>jq</code> is a wonderful command line to manipulate json on bash. We can for example get the title of the video like this :</p>\n<pre><code class=\"language-bash\">$ cat The\\ New\\ Youtube\\ Chapter\\ Timestamp\\ Feature-HZTStHzWRxM.info.json | jq -r .title | sed -e 's/[^A-Za-z0-9._-]/_/g'\n\nThe_New_Youtube_Chapter_Timestamp_Feature\n</code></pre>\n<p>The <code>sed</code> here is to make sure we won’t have special characters that might lead to some error later.</p>\n<p>The <code>-r</code> on <code>jq</code> indicate to return “raw text”. By default, <code>jq</code> will use some syntax colorization and keep some sepcial character that might leads to some issue. </p>\n<p>If available, Youtube-dl info json contains a <code>chapters</code> array that contain all the chapters with their <code>start_time</code> , <code>end_time</code> and <code>title</code> .</p>\n<pre><code class=\"language-bash\">$ cat The\\ New\\ Youtube\\ Chapter\\ Timestamp\\ Feature-HZTStHzWRxM.info.json |\\\njq -r '.chapters[]'\n\n{\n "start_time": 0,\n "end_time": 17,\n "title": "The new feature"\n}\n{\n "start_time": 17,\n "end_time": 76,\n "title": "Slow roll-out"\n}\n{\n "start_time": 76,\n "end_time": 124,\n "title": "How it works"\n}\n{\n "start_time": 124,\n "end_time": 180,\n "title": "Problems / suggestions for the future"\n}\n</code></pre>\n<p>The idea now is to use each dict entry here as parameters for <code>ffmpeg</code> to split the media according to the chapters data. As we are in bash, current json representation will be quite hard to use it like that, so we need to transform a little bit the representation here to use the output of <code>jq</code> in a pipe and in <code>xargs</code>.</p>\n<p>What also we need to take into consideration is that <code>ffmpeg</code> can split a media by giving the option <code>-ss</code> to know where to start and <code>-t</code> to know the <strong>duration</strong> of the cut, <strong>not the end time</strong>. 
As the information in the JSON gives us a start and end time, we need to perform a simple subtraction to get the start time and the duration.</p>\n<pre><code class=\"language-bash\">$ cat The\\ New\\ Youtube\\ Chapter\\ Timestamp\\ Feature-HZTStHzWRxM.info.json |\\\njq -r '.chapters[] | .start_time,.end_time-.start_time,.title ' |\\\nsed 's/"//g'\n\n0\n17\nThe new feature\n17\n59\nSlow roll-out\n76\n48\nHow it works\n124\n56\nProblems / suggestions for the future\n</code></pre>\n<p>Thanks to <code>jq</code>, we can perform simple math operations directly in the command to compute the duration. <code>sed</code> here again is only for cleaning up special characters.</p>\n<p>Now we can pipe into the wonderful <code>xargs</code> to use the output as parameters and trigger an <code>ffmpeg</code> command:</p>\n<pre><code class=\"language-bash\">$ cat The\\ New\\ Youtube\\ Chapter\\ Timestamp\\ Feature-HZTStHzWRxM.info.json|\\\njq -r '.chapters[] | .start_time,.end_time-.start_time,.title ' |\\\nsed -e 's/[^A-Za-z0-9._-]/_/g' |\\\nxargs -n3 -t -d'\\n' sh -c 'ffmpeg -y -ss $0 -i "The New Youtube Chapter Timestamp Feature-HZTStHzWRxM.mp3" -t $1 -codec:a copy "$2.mp3"'\n</code></pre>\n<ul>\n<li><code>-n3</code> tells xargs to take parameters 3 by 3</li>\n<li><code>-t</code> is only for debugging, as it prints each command <code>xargs</code> will execute</li>\n<li><code>-d'\\n'</code> indicates that parameters are separated by <code>\\n</code></li>\n</ul>\n<p>What is cool is that we could potentially parallelize the process here by adding the parameter <code>-P X</code> to <code>xargs</code> to run the multiple <code>ffmpeg</code> invocations in parallel.</p>\n<p>On the <code>ffmpeg</code> side, nothing tremendous: </p>\n<ul>\n<li><code>-ss</code> and <code>-t</code> have already been explained as start time and duration,</li>\n<li><code>-codec:a copy</code> indicates that we keep the same codec as the original file, so there is no re-encoding for the output file, which means it goes fast </li>\n<li><code>-y</code> avoids the prompt and forces overwriting an existing output file</li>\n</ul>\n<p>That works quite well. 
It might be possible to fully one-line it, but let’s write a proper script to ease the usage.</p>\n<pre><code class=\"language-bash\">#!/bin/sh\nset -x\n\n# Download media + metadata\nyoutube-dl --write-info-json -x --audio-format mp3 -o "tmp_out.%(ext)s" $1\n\n# Maybe there is a way to get the file name from the previous command\nINFO="tmp_out.info.json"\nAUDIO="tmp_out.mp3"\necho :: $INFO $AUDIO ::\n# Fetch the title\nTITLE=$(cat "$INFO" | jq -r .title | sed -e 's/[^A-Za-z0-9._-]/_/g' )\n # ^--- Remove all weird characters as we want to use it as a filename\n# We will put all chapters into a directory\nmkdir "$TITLE"\n\n# Chapterization\ncat "$INFO" |\\\njq -r '.chapters[] | .start_time,.end_time-.start_time,.title ' |\\\nsed -e 's/[^A-Za-z0-9._-]/_/g' |\\\nxargs -n3 -t -d'\\n' sh -c "ffmpeg -y -ss \\$0 -i \\"$AUDIO\\" -t \\$1 -codec:a copy -f mp3 \\"$TITLE/\\$2.mp3\\""\n\n# Remove tmp files\nrm tmp_out*\n</code></pre>\n<p>The script file is here: <a href=\"https://gist.github.com/totetmatt/b4bf50c62642e5a9e1bf6365a47e19c6\">https://gist.github.com/totetmatt/b4bf50c62642e5a9e1bf6365a47e19c6</a></p>\n<p>No big change in the global approach, but something to be careful about: yes, there is a hellish quote-escaping game to play, and it might not be pleasant….</p>\n<p>To explain the last part, as far as I understand it, the string will be evaluated multiple times: </p>\n<ul>\n<li>The first time will be at “script level”, so it will replace any <code>$VARIABLE</code> present in the script, like <code>$AUDIO</code> and <code>$TITLE</code> </li>\n<li>The second time will be at the <code>xargs / sh -c</code> invocation, where it’s then possible to use <code>$0 $1 and $2</code>. But if we don’t escape them first, these variables would be evaluated in the first round; that’s why we need to backslash them: <code>\\$0, \\$1, \\$2</code>.</li>\n</ul>\n<p>You can see the result of the string after the 1st evaluation thanks to the <code>-t</code> option of <code>xargs</code>: </p>\n<pre><code class=\"language-bash\">sh -c 'ffmpeg -y -ss $0 -i "The New Youtube Chapter Timestamp Feature-HZTStHzWRxM.mp3" -t $1 -codec:a copy -f mp3 "The_New_Youtube_Chapter_Timestamp_Feature/$2.mp3"' 124 56 Problems___suggestions_for_the_future\n</code></pre>\n<p>There might be other and better ways to deal with the args parsing, the string escaping and the string cleanup, but the current solution works well enough :)</p>\n
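<p>As mentioned above, the chapterization step parallelizes nicely. Here is a sketch of the same xargs call with 4 concurrent ffmpeg jobs; only <code>-P 4</code> is new, everything else is unchanged:</p>\n<pre><code class=\"language-bash\">cat "$INFO" |\\\njq -r '.chapters[] | .start_time,.end_time-.start_time,.title ' |\\\nsed -e 's/[^A-Za-z0-9._-]/_/g' |\\\nxargs -n3 -t -d'\\n' -P 4 sh -c "ffmpeg -y -ss \\$0 -i \\"$AUDIO\\" -t \\$1 -codec:a copy -f mp3 \\"$TITLE/\\$2.mp3\\""\n</code></pre>\n",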
"author": {
"name": "Totetmatt"
},
"tags": [
"youtube",
"ffmpeg",
"bash"
],
"date_published": "2020-06-26T01:24:24+02:00",
"date_modified": "2020-06-26T10:39:25+02:00"
},
{
"id": "https://totetmatt.github.io/bash-sort-2.html",
"url": "https://totetmatt.github.io/bash-sort-2.html",
"title": "Bash Sort",
"summary": "From time to time, we got some code challenge / codekata at work. The concept is simple it’s just some simple problem to sovles so…",
"content_html": "<p>From time to time, we got some code challenge / codekata at work. The concept is simple it’s just some simple problem to sovles so that we can share. I usualy do one version in a language like <strong>Scala</strong>, <strong>Python</strong> or other more or less fancy language like <strong>C++</strong> or even <strong>Haskell</strong> but I also try to have a <strong>Bash</strong> version, with extra point if I can oneline it.</p>\n<p>Here it was about sorting Star Wars movies from their story chronological order starting from a list with the release order.</p>\n<pre><code class=\"language-bash\">## Generation of files ##\n\ncat > movies.txt <<EOL\nA New Hope (1977)\nThe Empire Strikes Back (1980)\nReturn of the Jedi (1983)\nThe Phantom Menace (1999)\nAttack of the Clones (2002)\nRevenge of the Sith (2005)\nThe Force Awakens (2015)\nRogue One: A Star Wars Story (2016)\nThe Last Jedi (2017)\nSolo: A Star Wars Story (2018)\nEOL\n\ncat > order.txt <<EOL\n4 \n5\n6\n10\n8\n1\n2\n3\n7\n9\nEOL\n\ncat > expect.txt <<EOL\nThe Phantom Menace (1999)\nAttack of the Clones (2002)\nRevenge of the Sith (2005)\nSolo: A Star Wars Story (2018)\nRogue One: A Star Wars Story (2016)\nA New Hope (1977)\nThe Empire Strikes Back (1980)\nReturn of the Jedi (1983)\nThe Force Awakens (2015)\nThe Last Jedi (2017)\nEOL\n</code></pre>\n<p>Each line of <code>order.txt</code> tells which line of <code>movies.txt</code> to substitute so we can’t just <code>paste order.txt movies.txt | sort -n</code> because we need to find a way to extract the <strong>nth</strong> line of the <code>movies.txt</code> </p>\n<p>To do that, we can use <code>sed -n "Np" file</code>, <code>N</code> is the line number to get, <code>p</code> is the sed command to “print”. <code>-n</code> is needed as by default, sed will print the file anyway.</p>\n<pre><code class=\"language-bash\">totetmatt$ sed -n 10p movies.txt\n\nSolo: A Star Wars Story (2018)\n</code></pre>\n<p>We can then wire this with the output of the <code>order.txt</code> via a pipeline and a <code>xargs</code>. Let’s also add a <code>tee</code> at the end so it stores the result in a file while keeping the possibilites to use the output for other operation.</p>\n<pre><code class=\"language-bash\">cat order.txt | xargs -I % sed -n "%p" movies.txt | tee result.txt\n\nThe Phantom Menace (1999)\nAttack of the Clones (2002)\nRevenge of the Sith (2005)\nSolo: A Star Wars Story (2018)\nRogue One: A Star Wars Story (2016)\nA New Hope (1977)\nThe Empire Strikes Back (1980)\nReturn of the Jedi (1983)\nThe Force Awakens (2015)\nThe Last Jedi (2017)\n</code></pre>\n<p>We could add some automated check to be sure the operation works as expected. Good solution will be to use <code>diff result.txt expect.txt</code> as we have 2 files generated. But let’s say that we don’t have the <code>result.txt</code> , only the command output and still want to use the <code>diff</code> that only accept file as input.</p>\n<p>We can then use <code><(command)</code> to concider the whole command output as an input file for <code>diff</code>. </p>\n<pre><code class=\"language-bash\">diff <(cat order.txt | xargs -I % sed -n "%p" movies.txt | tee result.txt) expect.txt || echo "No ok"\n</code></pre>\n<p><a href=\"https://gist.github.com/totetmatt/2b4c74eb214fcc6d04ffcd39bdbd43ad\">https://gist.github.com/totetmatt/2b4c74eb214fcc6d04ffcd39bdbd43ad</a></p>\n",
"author": {
"name": "Totetmatt"
},
"tags": [
"bash"
],
"date_published": "2020-06-20T11:05:17+02:00",
"date_modified": "2020-06-20T11:05:49+02:00"
},
{
"id": "https://totetmatt.github.io/ffmpeg.html",
"url": "https://totetmatt.github.io/ffmpeg.html",
"title": "FFMpeg",
"summary": "Some findings and command line I'm using regularly with ffmpeg. http://www.astro-electronic.de/FFmpeg_Book.pdf https://engineering.giphy.com/how-to-make-gifs-with-ffmpeg/ ffmpeg -i inputd.mp4 -filter_complex \"[0:v] fps=12,scale=w=480:h=-1,split [a][b];[a] palettegen=stats_mode=single [p];[b][p] paletteuse=new=1\" output.gif https://superuser.com/questions/777938/ffmpeg-convert-a-video-to-a-timelapse ffmpeg…",
"content_html": "<p>Some findings and command line I'm using regularly with ffmpeg.</p>\n<h2>General User Doc</h2>\n<p><a href=\"http://www.astro-electronic.de/FFmpeg_Book.pdf\">http://www.astro-electronic.de/FFmpeg_Book.pdf</a></p>\n<p> </p>\n<h2>Create gif</h2>\n<p><a href=\"https://engineering.giphy.com/how-to-make-gifs-with-ffmpeg/\">https://engineering.giphy.com/how-to-make-gifs-with-ffmpeg/</a></p>\n<p><code>ffmpeg -i inputd.mp4 -filter_complex \"[0:v] fps=12,scale=w=480:h=-1,split [a][b];[a] palettegen=stats_mode=single [p];[b][p] paletteuse=new=1\" output.gif</code></p>\n<h2>Timelapse</h2>\n<p><a href=\"https://superuser.com/questions/777938/ffmpeg-convert-a-video-to-a-timelapse\">https://superuser.com/questions/777938/ffmpeg-convert-a-video-to-a-timelapse</a></p>\n<p><code>ffmpeg -i input.mp4 -filter:v \"setpts=0.5*PTS\" -an output.mp4</code></p>\n<p><a href=\"http://mahugh.com/2015/04/29/creating-time-lapse-videos/\">http://mahugh.com/2015/04/29/creating-time-lapse-videos/</a></p>\n<p><a href=\"http://social.d-e.gr/techblog/posts/12-smoother-timelapses-ffmpeg\">http://social.d-e.gr/techblog/posts/12-smoother-timelapses-ffmpeg</a></p>\n<p><code>ffmpeg -i input -vf \"tblend=average,framestep=2,tblend=average,framestep=2,tblend=average,framestep=2,tblend=average,framestep=2,setpts=0.25*PTS\" -r 96 -b:v 30M -crf 10 -an output</code></p>",
"author": {
"name": "Totetmatt"
},
"tags": [
"video",
"ffmpeg"
],
"date_published": "2020-06-18T21:26:35+02:00",
"date_modified": "2020-06-20T01:38:40+02:00"
},
{
"id": "https://totetmatt.github.io/keras-and-gephi-visualize-your-deep-learning-graph.html",
"url": "https://totetmatt.github.io/keras-and-gephi-visualize-your-deep-learning-graph.html",
"title": "Keras and Gephi : Visualize your Deep Learning Graph",
"summary": "If you work on Machine Learning / Deep Learning with Keras, you can export the model in a dot file. And guess what ? Gephi…",
"content_html": "<p>If you work on Machine Learning / Deep Learning with Keras, you can export the model in a dot file. And guess what ? Gephi can read dot files ! :D</p>\n<p>To do that use this code (adapt it for your usecase)</p>\n<p><a href=\"https://gist.github.com/totetmatt/dcc85d27b0fdfd79513cbe43201f507f\">https://gist.github.com/totetmatt/dcc85d27b0fdfd79513cbe43201f507f</a></p>\n<pre>from keras.applications import *\nfrom keras.utils import plot_model\n# [..]\n# model = ...\n# Get your own model here\n# [..]\nmodel = NASNetMobile() #Example with NASNetMobile\nplot_model(model,show_shapes=False, to_file='model.dot')</pre>\n<p>Then it will generate a <em>model.dot</em> file that you can open directly into Gephi !</p>\n<figure class=\"alignnone size-medium wp-image-569\"><img loading=\"lazy\" src=\"https://totetmatt.github.io/media/posts/40/screenshot_111609-300x225.png\" sizes=\"(max-width: 48em) 100vw, 768px\" srcset=\"https://totetmatt.github.io/media/posts/40/responsive/screenshot_111609-300x225-xs.png 300w ,https://totetmatt.github.io/media/posts/40/responsive/screenshot_111609-300x225-sm.png 480w ,https://totetmatt.github.io/media/posts/40/responsive/screenshot_111609-300x225-md.png 768w ,https://totetmatt.github.io/media/posts/40/responsive/screenshot_111609-300x225-lg.png 1024w ,https://totetmatt.github.io/media/posts/40/responsive/screenshot_111609-300x225-xl.png 1360w ,https://totetmatt.github.io/media/posts/40/responsive/screenshot_111609-300x225-2xl.png 1600w\" alt=\"\" width=\"300\" height=\"225\" /></figure>\n",
"author": {
"name": "Totetmatt"
},
"tags": [
],
"date_published": "2018-03-24T11:20:41+01:00",
"date_modified": "2020-06-20T00:35:15+02:00"
},
{
"id": "https://totetmatt.github.io/twitter-streaming-importer-naoyun-as-a-gephi-plugin.html",
"url": "https://totetmatt.github.io/twitter-streaming-importer-naoyun-as-a-gephi-plugin.html",
"title": "Twitter Streaming Importer : Naoyun as a Gephi Plugin",
"summary": "Hello everybody ! Great news today, almost a 5 years acheivement : Twitter Streaming Importer is out. It uses the Twitter Stream API to get…",
"content_html": "<p>Hello everybody !</p>\n<p>Great news today, almost a 5 years acheivement : Twitter Streaming Importer is out.</p>\n<p>It uses the Twitter Stream API to get current tweets and display them as a graph in realtime in gephi.</p>\n<p>Is basically a simple version of Naoyun embeeded to Gephi, which will be easier to use for everybody I hope.</p>\n<p>It embeed the 3 main network logic with little update on it :</p>\n<ul>\n<li><strong>User Network</strong> : Still a User to User network, but with Gephi 0.9 , we can now have parallel edges, which means now this network logic will differentiate a \"Retweet\" and a \"Mention\". Moreover, each reference with update the weight of the edges.</li>\n<li><strong>Smart Full network</strong> : Creates a full graph of a tweet activity.</li>\n<li><strong>Hashtag Network</strong> : Doing a graph based only on hashtags.</li>\n</ul>\n<p>Just download it from Gephi, in <strong>Tools > Plugin </strong>and follow the steps.</p>\n<p>You will need to have a twitter account and to create a dummy application here <a href=\"https://apps.twitter.com/\">https://apps.twitter.com/</a> to use the plugin.</p>\n<h2>What's on the pipe for next version of the plugin</h2>\n<ul>\n<li><strong>Enhance the data</strong> : For the moment, only the \"label\" is used, in the future it should be possible to have all the metadata from a tweet , a user etc....</li>\n<li><strong>Twitter API Key</strong> : It's a persistant problem , the model of Twitter to access their API isn't design for Open source desktop project. It need to check with the Gephi guys how it would be possible to ease the Key registration for the User.</li>\n<li><strong>Custom Network Logic</strong> : It's technically possible today to have your own Network logic used in the Plugin. The process and the way to do it just need to be reviewed.</li>\n<li>Access to the <strong>sample stream api</strong></li>\n<li>Adding possibility to track per<strong> Localisation</strong></li>\n</ul>\n<h2>It's not the end for Naoyun</h2>\n<p>Naoyun won't die and will keep some specificities that could not be transfered to the Gephi Plugin. But for the moment, the developpement is slowed down due to some dependencies issues and permanent refactoring.</p>\n",
"author": {
"name": "Totetmatt"
},
"tags": [
"Gephi"
],
"date_published": "2016-04-25T20:33:47+02:00",
"date_modified": "2020-06-20T00:35:15+02:00"
}
]
}