
News

Posted over 11 years ago
So 2013 is finally over, and it's been an energy-sapping year: business-, baby-, and building-wise.

Business. The stagnation that was present for pretty much the first half of the year, and which forced us to downsize a bit, was replaced by too many projects all at once in the second half. And while that was welcome, since it saved us from closing our doors, it prevented us from working on our private projects, i.e. our apps in the store, but also personal pet projects, let alone anything open source.

Baby. After 7 horrible months between Lara Marie's 5th and 13th month, she finally began sleeping great, often 12 hours without waking up. She's now 2.5 years old and everything is good. Still, she's a demanding little one, who enjoys being offered a selection of everything instead of having us decide on her behalf. I love her.

Building. With a bit of (natural) delay, our new house was finished by November and we moved in on the 9th of December. We're now 2 months in and it's feeling mostly great. We had to monitor and decide on a LOT of things during the construction phase, but apart from the usual minor issues, the build quality is good and we enjoy the comfort of having a dedicated room for Lara Marie. Being able to use the living room again after 20:00 is nice. I took the liberty of installing a dedicated server for the house, which lives in a 19″ rack in the utility room. I'm going to post about the networking infrastructure soon.

Referring to my last post, I'm still planning on doing the sabbatical, but due to some unforeseeable circumstances with my wife's health, it had to be postponed for a bit. It's going to happen in 2014 though, which is why I'm sure 2014 is going to be better than 2013.

The post Welcome, 2014 first appeared on Vanille.de.
Posted over 11 years ago
As I, and many others, have written before, on mobile the rendering/processing of JS is done asynchronously to responding to the user scrolling, so that we can maintain touch response and screen updates. We basically have no chance of consistently hitting 60fps if we don't do this (and you can witness what happens if you don't by running desktop Firefox, for now). This does mean, however, that you end up with bugs like this, where people respond in JavaScript to the scroll position changing and end up with jerky animation because there are no guarantees about the frequency or timeliness of scroll position updates. It also means that neat parallax sites like this can't be done in quite the same way on mobile. Although this is currently only a problem on mobile, it will eventually affect desktop too. I believe that Internet Explorer already uses asynchronous composition on the desktop, and I think that's the way we're going in Firefox too. It'd be great to have a solution for this problem first.

It's obvious that we could do with a way of declaring a link between a CSS property and the scroll position. My immediate thought is to do this via CSS. I had this idea for a syntax:

    scroll-transition-(x|y): <transition-declaration> [, <transition-declaration>]*

    where transition-declaration = <property>( <transition-stop> [, <transition-stop>]+ )
    and transition-stop = <relative-scroll-position> <property-value>

This would work quite similarly to standard transitions, where a limited number of properties would be supported, and perhaps their interpolation could be defined in the same way too. Relative scroll position is 0px when the scroll position of the particular axis matches the element's offset position. This would lead to declarations like this:

    scroll-transition-y: opacity( 0px 0%, 100px 100%, 200px 0% ),
                         transform( 0px scale(1%), 100px scale(100%), 200px scale(1%) );

This would define a transition that would grow and fade in an element as the user scrolled it towards 100px down the page, then shrink and fade it out as you scrolled beyond that point.

But then Paul Rouget made me aware that Anthony Ricaud had the same idea, but instead of this slightly arcane syntax, tied it to CSS animation keyframes. I think this is more easily implemented (at least in Firefox's case), more flexible, and more easily expressed by designers too. Much like transitions and animations, these need not be mutually exclusive, I suppose (though the interactions between them might mean that, as a platform developer, it'd be in my best interests to suggest that they should :)). I'm not aware of any formal proposal of this suggestion, so I'll describe the syntax that I would expect. I think it should inherit from the CSS animation spec, but prefix the animation-* properties with scroll-. Instead of animation-duration, you would have scroll-animation-bounds. scroll-animation-bounds would describe a vector, the distance along which would determine the position of the animation. Imagine that this vector was actually a plane that extended infinitely, perpendicular to its direction of travel; your distance along the vector is unaffected by your distance to the vector. In other words, if you had a scroll-animation-bounds that described a line going straight down, your horizontal scroll position wouldn't affect the animation. Animation keyframes would be defined in the exact same way.
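For illustration, here is a minimal sketch (mine, not from the original post) of the scroll-driven JavaScript pattern described at the start of this post, the kind that ends up jerky under asynchronous scrolling; the '#background' element is hypothetical:

    // Naive parallax: read the scroll position in a scroll handler and move a
    // layer from JavaScript. With asynchronous scrolling/composition there is
    // no guarantee this handler runs for every displayed scroll position, so
    // the layer can visibly lag or stutter.
    var layer = document.getElementById('background');
    window.addEventListener('scroll', function () {
      // Move the layer at half the scroll speed for a parallax effect.
      layer.style.transform = 'translateY(' + (window.scrollY * 0.5) + 'px)';
    });

A declarative link between scroll position and CSS properties, as proposed above, would let the compositor apply such an effect in sync with scrolling instead.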
[Edit] Paul Rouget suggests that rather than having a prefixed copy of the animation properties, a new property be introduced, animation-controller, whose default value would be time, but with a new option, scroll. We would still need an equivalent to duration, so I would re-purpose my above-suggested property as animation-scroll-bounds.

What do people think about either of these suggestions? I'd love to hear some conversation/suggestions/criticisms in the comments, after which perhaps I can submit a revised proposal and begin an implementation.
Posted over 11 years ago
Drawing on some of my limited HTML5 games experience, and marginally less limited general games and app writing experience, I'd like to write a bit about efficient animation for games on the web. I usually prefer to write about my experiences, rather than just straight advice-giving, so I apologise profusely for how condescending this will likely sound. I'll try to improve in the future. There are a few things worth knowing that will really help your game (or indeed app) run better and use less battery life, especially on low-end devices. I think it's worth getting some of these things down, as there's evidence to suggest (in popular and widely-used UI libraries, for example) that it isn't necessarily common knowledge. I'd also love to know if I'm just being delightfully/frustratingly naive in my assumptions. First off, let's get the basic stuff out of the way.

Help the browser help you

If you're using the DOM for your UI, which I'd certainly recommend, you really ought to use CSS transitions and/or animations, rather than JavaScript-powered animations. Though JS animations can be easier to express at times, unless you have a great need to synchronise UI animation state with game animation state, you're unlikely to be able to do a better job than the browser. The reason for this is that CSS transitions/animations are much higher level than JavaScript, and express a very specific intent. Because of this, the browser can make some assumptions that it can't easily make when you're manually tweaking values in JavaScript. To take a concrete example, if you start a CSS transition to move something from off-screen so that it's fully visible on-screen, the browser knows that the related content will end up completely visible to the user and can pre-render that content. When you animate position with JavaScript, the browser can't easily make that same assumption, and so you might end up causing it to draw only the newly-exposed region of content, which may introduce slow-down. There are signals at the beginning and end of animations that allow you to attach JS callbacks and form a rudimentary form of synchronisation (though there are no guarantees on how promptly these callbacks will happen).

Speaking of assumptions the browser can make, you want to avoid causing it to have to relayout during animations. In this vein, it's worth trying to stick to animating only the transform and opacity properties. Though some browsers make some effort for other properties to be fast, these are pretty much the only ones semi-guaranteed to be fast across all browsers. Something to be careful of is that overflow may end up causing relayouting, or other expensive calculations. If you're setting a transform on something that would overlap its container's bounds, you may want to set overflow: hidden on that container for the duration of the animation.

Use requestAnimationFrame

When you're animating canvas content, or when your DOM animations absolutely must synchronise with canvas content animations, do make sure to use requestAnimationFrame. Assuming you're running in an arbitrary browsing session, you can never really know how long the browser will take to draw a particular frame. requestAnimationFrame causes the browser to redraw and call your function before that frame gets to the screen. The downside of using this vs. setTimeout is that your animations must be time-based instead of frame-based, i.e. you must keep track of time and set your animation properties based on elapsed time.
requestAnimationFrame includes a time-stamp in its callback function prototype, which you most definitely should use (as opposed to using the Date object), as this will be the time the frame began rendering, and ought to make your animations look more fluid. You may have a callback that ends up looking something like this:

    var startTime = -1;
    var animationLength = 2000; // Animation length in milliseconds

    function doAnimation(timestamp) {
      // Calculate animation progress
      var progress = 0;
      if (startTime < 0) {
        startTime = timestamp;
      } else {
        progress = Math.min(1.0, (timestamp - startTime) / animationLength);
      }

      // Do animation ...

      if (progress < 1.0) {
        requestAnimationFrame(doAnimation);
      }
    }

    // Start animation
    requestAnimationFrame(doAnimation);

You'll note that I set startTime to -1 at the beginning, when I could just as easily set the time using the Date object and avoid the extra code in the animation callback. I do this so that any setup or processes that happen between the start of the animation and the callback being processed don't affect the start of the animation, and so that all the animations I start before the frame is processed are synchronised.

To save battery life, it's best to only draw when there are things going on, so that would mean calling requestAnimationFrame (or your refresh function, which in turn calls that) in response to events happening in your game. Unfortunately, this makes it very easy to end up drawing things multiple times per frame. I would recommend keeping track of when requestAnimationFrame has been called and only having a single handler for it. As far as I know, there aren't solid guarantees on what order things will be called in with requestAnimationFrame (though in my experience, it's the order in which they were requested), so this also helps cut out any ambiguity. An easy way to do this is to declare your own refresh function that sets a flag when it calls requestAnimationFrame. When the callback is executed, you can unset that flag so that calls to that function will request a new frame again, like this:

    var drawPending = false;

    function redraw() {
      drawPending = false;
      // Do drawing ...
    }

    function requestRedraw() {
      if (!drawPending) {
        drawPending = true;
        requestAnimationFrame(redraw);
      }
    }

Following this pattern, or something similar, means that no matter how many times you call requestRedraw, your drawing function will only be called once per frame.

Remember that when you do drawing in requestAnimationFrame (and in general), you may be blocking the browser from updating other things. Try to keep unnecessary work outside of your animation functions. For example, it may make sense for animation setup to happen in a timeout callback rather than a requestAnimationFrame callback, and likewise if you have a computationally heavy thing that will happen at the end of an animation. Though I think it's certainly overkill for simple games, you may want to consider using Worker threads. It's worth trying to batch similar operations, and to schedule them at a time when screen updates are unlikely to occur, or when such updates are of a more subtle nature. Modern console games, for example, tend to prioritise framerate during player movement and combat, but may prioritise image quality or physics detail when compromise to framerate and input response would be less noticeable.
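As an illustration of the earlier point about keeping heavy work out of frame callbacks, here is a minimal sketch (mine, not from the original post); rebuildSpatialIndex is a hypothetical expensive function:

    // Called from the last animation frame of, say, a piece-drop animation.
    // Rather than doing expensive work inside the requestAnimationFrame
    // callback (which would delay presenting the frame), defer it to a
    // timeout so the browser can get the final frame to the screen first.
    function onAnimationFinished() {
      setTimeout(function () {
        rebuildSpatialIndex(); // hypothetical: heavy, non-visual bookkeeping
      }, 0);
    }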
Measure performance

One of the reasons I bring this topic up is that there exist some popular animation-related libraries, or popular UI toolkits with animation functions, that still do things like using setTimeout to drive their animations, drive all their animations completely individually, or other similar things that aren't conducive to maintaining a high frame-rate. One of the goals for my game Puzzowl is for it to be a solid 60fps on reasonable hardware (for the record, it's almost there on Galaxy Nexus-class hardware) and playable on low-end hardware (almost there on a Geeksphone Keon). I'd have liked to use as much third-party software as possible, but most of what I tried was either too complicated for simple use-cases, or had performance issues on mobile. How I came to this conclusion is more important than the conclusion itself, however. To begin with, my priority was to write the code quickly to iterate on gameplay (and I'd certainly recommend doing this). I assumed that my own, naive code was making the game slower than I'd like. To an extent, this was true; I found plenty to optimise in my own code, but it got to the point where I knew what I was doing ought to perform quite well, and I still wasn't quite there. At this point, I turned to the Firefox JavaScript profiler, and this told me almost exactly what low-hanging fruit was left to address to improve performance. As it turned out, I suffered from some of the things I've mentioned in this post; my animation code had some corner cases where it could cause redraws to happen several times per frame, some of my animations caused Firefox to need to redraw everything (they were fine in other browsers, as it happens; that particular issue is now fixed), and some of the third-party code I was using was poorly optimised.

A take-away

To help combat poor animation performance, I wrote Animator.js. It's a simple animation library, and I'd like to think it's efficient and easy to use. It's heavily influenced by various parts of Clutter, but I've tried to avoid scope-creep. It does one thing, and it does it well (or adequately, at least). Animator.js is a fire-and-forget style animation library, designed to be used with games, or other situations where you need many, synchronised, custom animations. It includes a handful of built-in tweening functions, the facility to add your own, and helper functions for animating object properties. I use it to drive all the drawing updates and transitions in Puzzowl, by overriding its requestAnimationFrame function with a custom version that makes the request, but appends the game's drawing function onto the end of the callback, like so:

    animator.requestAnimationFrame = function(callback) {
      requestAnimationFrame(function(t) {
        callback(t);
        redraw();
      });
    };

My game's redraw function does all drawing, and my animation callbacks just update state. When I request a redraw outside of animations, I just check the animator's activeAnimations property first to stop from mistakenly drawing multiple times in a single animation frame. This gives me nice, synchronised animations at very low cost. Puzzowl isn't out yet, but there's a little screencast of it running on a Nexus 5: Alternative, low-framerate YouTube link.
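A sketch of that outside-of-animation check (my own code, not from Animator.js's documentation; I'm assuming activeAnimations is truthy while animations are running, which may differ from the library's actual API), building on the requestRedraw helper from earlier:

    // Only request a redraw directly when no animations are in flight;
    // otherwise the animation-driven redraw above already covers this frame.
    function maybeRedraw() {
      if (!animator.activeAnimations) {
        requestRedraw();
      }
    }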
Posted over 11 years ago
The latest device in the OpenPhoenux open hardware family is the Neo900, the first true successor to the Nokia N900. The Neo900 is a joint project of the Openmoko veteran Jörg Reisenweber and the creators of the GTA04/Letux2804 open hardware smartphone at Golden Delicious Computers. Furthermore, it is supported by the N900 Maemo5/Fremantle community, the Openmoko community and the OpenPhoenux community, who are working together to get closer to their common goal of providing an open hardware smartphone which is able to run 100% free and open source software, while being independent of any big hardware manufacturer.

OpenPhoenux Neo900

With the big ecosystem of free and open Maemo5/Fremantle applications, the hacker-friendly N900 with its excellent hardware keyboard, the variety of free operating systems from the Openmoko community (SHR, QtMoko, Replicant, …) and the OpenPhoenux community's experience in designing and producing open hardware devices (e.g. the GTA04), they want to bring the best of all worlds together in one single device: the Neo900. The Neo900 is meant to be an upgraded N900, with a newly designed and more powerful motherboard based upon the existing and tested OpenPhoenux GTA04 design. Together with the nice housing of the N900 (slider, hardware keyboard, big screen, …), this is trying to become "the hacker's most beloved device". In the same spirit of the OpenPhoenux community, which created unique cases for their GTA04 devices out of aluminium, wood or 3D printing, there is also an effort to build an aluminium housing for the N900, which might lead to personalised and self-produced cases for the Neo900 in the future, and thus independence from second-hand N900 smartphones. Due to the fact that the Neo900's new motherboard is very similar to the GTA04's, it is possible to reuse most of the low-level software stack, like the development tools, the bootloader and the Linux kernel, from the GTA04 project, with just minor modifications applied. This will speed up the software development process of this new open hardware platform a lot!

To fund the development and prototyping of this new open hardware device, which is made in Germany, a crowdfunding campaign was started a few days ago, in order to collect 25,000€ (which is by now already halfway reached!). Depending on the outcome of this fundraising, the project might be able to provide better hardware specs than the following minimum key feature set:

- TI DM3730 CPU (OMAP3 ARM Cortex A8) with 1+ GHz
- 512+ MB RAM, 1+ GB NAND flash, 32+ GB eMMC, Micro-SD reader
- 3.75G module for UMTS/CDMA; 4G (LTE) optional
- USB 2.0 OTG High Speed
- GPS, WLAN, Bluetooth
- Accelerometer, barometric altimeter, magnetometer, gyroscope
- Support for the N900 camera module

If you want to see the N900 live on, help the independent open hardware community succeed, or are looking for a new, hacker-friendly smartphone, you should consider supporting the fundraising with a donation. If you donate 100€ or more, your donation will also serve as a rebate for a finished device, once they are ready. Let the OpenPhoenux fly on!
Posted over 11 years ago
Hello everyone! It's been a long time since I last wrote on this blog, but that doesn't mean activity around OpenMoko is dead. Radek has decided that QtMoko is stable enough to slow down the pace of development, and he explained on the community mailing list that, for him, his distribution is mostly useful while waiting for an Android port to the GTA04. Speaking of Android, the Replicant project tried to port their version of Android to the GTA04, but they ran into trouble with the kernel, which has a few incompatibilities with Android. Since Replicant only has two developers, and the most active one doesn't know enough about kernel development to port Replicant to the GTA04, it was decided to wait until the kernel is usable before continuing the effort. That's why Golden Delicious is tracking mainline Linux kernel development at each RC (their work is currently based on version 3.12), since Android is gradually merging with the mainline Linux kernel over successive versions. So with a bit more time, I hope we'll be able to combine Golden Delicious's kernel expertise with Replicant's Android expertise, and finally have a usable Android 4.x port on the GTA04.

Otherwise, Golden Delicious has decided not to give up on making hardware "as free as possible" and is proposing a new project with the Nokia N900 community: the Neo900. The goal of this project is to build on the GTA04's development to revive the N900 community and offer it somewhat freer hardware (the N900 has a free OS, Maemo, but the hardware was never opened by Nokia). The idea is to reshape the GTA04 board to fit into the N900's case, and to take the opportunity to add an LTE module. Don't worry about Golden Delicious multiplying its projects: the goal is to share as many chipsets as possible between projects, in order to place larger orders than a single project would allow. The most exciting thing about this project is that it brings the different open-source/free-software communities together around the development of a single piece of hardware, and can thus offer even better support to users.

A new project also means funding: Golden Delicious launched its donation campaign on October 30th. Don't hesitate to contribute! Note also that the first milestone of 5,000€ needed to start development was already reached yesterday, and at the time of writing the campaign has just passed 10,000€. See you soon! Trim
Posted over 11 years ago
Aww, my 8-week sabbatical is now over. I wish I had more time, but I feel I used it well, and there are certainly lots of Firefox bugs I want to work on too, so perhaps it's about that time now (also, it's not that long till Christmas anyway!).

So, what did I do on my sabbatical? As I mentioned in the previous post, I took the time off primarily to work on a game, and that's pretty much what I did. Except, I ended up working on two games. After realising the scope for our first game was much larger than we'd reckoned for, we decided to work on a smaller puzzle game too. I had a prototype working in a day, then that same prototype rewritten in another day because DOM is slow, then rewritten again in another day because, as it turns out, canvas isn't particularly fast either. After that, it's been polish and refinement; it still isn't done, but it's fun to play and there's promise. We're not sure what the long-term plan is for this, but I'd like to package it with a runtime and distribute it on the major mobile app stores (it runs in every modern browser, IE included).

The first project ended up being a first-person, rogue-like dungeon crawler. None of those genres are known for being particularly brief or trivial games, so I'm not sure what we expected, but yes, it's a lot of work. In this time, we've made our idea of the game a bit more solid, designed some interaction, worked on various bits of art (texture sets, rough monsters) and have an engine that lets you walk around an area, pick things up, and features deferred, per-pixel lighting. It doesn't run very well on your average phone at the moment, and it has layout bugs in WebKit/Blink-based browsers. IE11's WebGL also isn't complete enough to render it as it is, though I expect I could get a basic version of it working there. I've put this on the back-burner slightly to focus on smaller projects that can be demoed and completed in a reasonable time-frame, but I hope to have the time to return to it intermittently and gradually bring it up to the point where it's recognisable as a game. You can read a short paragraph and see a screenshot of both of these games at our team website, or see a few more on our Twitter feed.

What did I learn on my sabbatical? Well, despite what many people are pretty eager to say, the web really isn't ready as a games platform. Or an app platform, in my humble opinion. You can get around the issues if you have a decent knowledge of how rendering engines are implemented and a reasonable grasp of debugging and profiling tools, but there are too many performance and layout bugs for it to be comfortable right now, considering the alternatives. While it isn't ready, I can say that it's going to be amazing when it is. You really can write an app that, with relatively little effort, will run everywhere. Between CSS media queries, viewport units and flexbox, you can finally, easily write a responsive layout that can be markedly different for desktop, tablet and phone, and CSS transitions and a little JavaScript give you great expressive power for UI animations. WebGL is good enough for writing most mobile games you see, if you can avoid jank caused by garbage collection and reflow. Technologies like CocoonJS make this really easy to deploy too. Given how positive that all sounds, why isn't it ready?
These are the top bugs I encountered while working on some games (from a mobile-specific viewpoint):

WebGL cannot be relied upon

WebGL has finally hit the release version of Chrome for Android, and has been enabled in Firefox and Opera for Android for ages now. The aforementioned CocoonJS even lets you use it on iOS. Availability isn't the problem. The problem is that it frequently crashes the browser, or you frequently lose context, for no good reason. Changing the orientation of your phone, or resizing the browser on desktop, has often caused the browser to crash in my testing. I've had lost contexts when my app is the only page running, no DOM manipulation is happening, no textures are being created or destroyed, and the phone isn't visibly busy with anything else. You can handle it, but having to recreate everything when this happens is not a great user experience. This happens frequently enough to be noticeable and annoying. It seems to vary a lot per phone, but is not something I've experienced with native development at this scale. An aside: Chrome also has an odd bug that causes a security exception if you load an image (on the same domain), render it scaled into a canvas, then try to upload that canvas. This, unfortunately, means we can't use WebGL on Chrome in our puzzle game.

Canvas performance isn't great

Canvas ought to be enough for simple 2D games, and there are certainly lots of compelling demos about, but I find it's near impossible to get 60fps, full-screen, full-resolution performance out of even quite simple cases, across browsers. Chrome has great canvas acceleration and Firefox has an accelerated canvas too (possibly Aurora+ only at the moment), and it does work, but not well enough that you can rely on it. My puzzle game uses canvas as a fallback renderer on mobile, when WebGL isn't an option, but it has markedly worse performance.

Porting to Chrome is a pain

A bit controversial, and perhaps a pot/kettle situation coming from a Firefox developer, but it seems that if Chrome isn't your primary target, you're going to have fun porting to it later. I don't want to get into specifics, but I've found that Chrome often lays out differently (and incorrectly, according to the specification) when compared to Firefox and IE10+, especially when flexbox becomes involved. Its transform implementation is quite buggy too, and often ignores the set perspective. There's also the small annoyance that some features that are unprefixed in other browsers are still prefixed in Chrome (animations, 3D transforms). I actually found Chrome to be more of a pain than IE. In modern IE (10+), things tend to either work, or not work. I had fewer situations where something purported to work, but was buggy or incorrectly implemented. Another aside: touch input in Chrome for Android has unacceptable latency, and there doesn't seem to be any way of working around it. No such issue in Firefox.

Appcache is awful

Uh, seriously. Who thought it was a good idea that appcache should work entirely independently of the browser cache? Because it isn't a good idea. It took me a while to figure out that I had to change my server settings so that the browser won't cache images/documents independently of appcache, breaking appcache updates. I tend to think that the most obvious and useful way for something to work should be how it works by default, and this is really not the case here.
An aside: Firefox has a bug that means that any two pages that have the same appcache manifest will cause a browser crash when accessing the second page. This includes an installed version of an online page using the same manifest.

CSS transitions/animations leak implementation details

This is the most annoying one, and I'll make sure to file bugs about this in Firefox at least. Because the setting of style properties gets coalesced, animations often don't run. Removing display:none from an element and setting a style class to run a transition on it won't work unless you force a reflow in between (see the sketch at the end of this post). Similarly, switching to one style class, then back again, won't cause the animation on the first style class to re-run. This is the case at least in Firefox and Chrome; I've not tested in IE. I can't believe that this behaviour is explicitly specified, and it's certainly extremely unintuitive. There are plenty of articles that talk about working around this; I'm kind of amazed that we haven't fixed it yet. I'm equally concerned about the bad habits that this encourages, too.

DOM rendering is slow

One of the big strengths of HTML5 as an app platform is how expressive HTML/CSS are, and how easily you can create user interfaces with them, then visually tweak and debug them. You would naturally want to use this in any app or game that you were developing primarily for the web. Except, at least for games, if you use the DOM for your UI, you are going to spend an awful lot of time profiling, tweaking and making seemingly irrelevant changes to your CSS to try and improve rendering speed. This is no good at all, in my opinion, as this is the big advantage that the web has over native development. If you're using WebGL only, you may as well just develop a native app and port it to wherever you want it, because using WebGL doesn't make cross-device testing any easier, and it certainly introduces a performance penalty. On the other hand, if you have a simple game, or a UI-heavy game, the web makes that much easier to work on. The one exception to this seems to be IE, which has absolutely stellar rendering performance. Well done, IE.

This has been my experience with making web apps. Although those problems exist, when things come together, the result is quite beautiful. My puzzle game, though there are still browser-specific bugs to work around and performance issues to fix, works across varying sizes and specifications of phone, in every major, modern browser. It even allows you to install it in Firefox as a dedicated app, or add it to your homescreen in iOS and Chrome Beta. Being able to point someone to a URL to play a game, with no further requirements, and no limitation of distribution or questionable agreements to adhere to, is a real game-changer. I love that the web fosters creativity and empowers the individual, despite the best efforts of various powers that be. We have work to do, but the future's bright.
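For reference, here is a minimal sketch (my own, not from the original post) of the forced-reflow workaround mentioned in the section on CSS transitions leaking implementation details; the element and class names are hypothetical:

    // Make a hidden element visible and transition it in. Without the forced
    // reflow, the two style changes get coalesced and the transition defined
    // by the 'slide-in' class never runs.
    var el = document.getElementById('panel');
    el.style.display = 'block';   // was display: none
    void el.offsetHeight;         // reading a layout property forces a reflow
    el.classList.add('slide-in'); // class whose styles trigger the transition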
Posted over 11 years ago
Yesterday I received some of the relatively new "Ninja Flex" filament sold by http://www.fennerdrives.com/. As the internet doesn't seem to overflow with print reviews / settings for it yet, I decided to post some words about it.

NinjaFlex Sapphire 1.75mm

The Filament

It is always difficult to measure a soft material, but using my calipers I measured the diameter to be 1.75mm, as it is supposed to be. The filament also seems to be nice and round. I ordered the "sapphire" version of the filament, and it has a nice matte blue colour which turns glossy when printed. It is also slightly translucent when printed thinly. The filament is very flexible (I can tie a tight knot in it without it breaking). The filament is also elastic, but not as much as a regular rubber band… perhaps 5-8 times harder, if I should make a guess. The material is not known to me, but I strongly suspect it to be polyurethane (PUR) with a surface coating/treatment to make it less sticky: Fennerdrives already produces PUR belting, which has been used in 3D printing prior to this material appearing, and there's the matte-to-glossy change. The Fennerdrives recommended settings are:

- Recommended extruder temperature: 210 – 225°C
- Recommended platform temperature: 30 – 40°C

The filament isn't exactly cheap; I would say roughly 3x the cost of PLA/ABS including shipping, compared to the cheap PLA/ABS I normally buy. Then again, soft/specialty filaments don't normally seem to come cheaply. (Actually, a lot of the cost comes from the somewhat expensive UPS shipping.) Fennerdrives ships from both the US and the UK; living in Denmark (inside the EU), this is a big plus for me.

3D model for the rubber feet

The test prints

As I'm currently designing and building a tabletop CNC mill, I thought that I might as well print some rubber feet for it. The print isn't necessarily the simplest one, due to the outwards-sloping unsupported walls. However, the angle is quite close to vertical and wouldn't normally cause problems. The 3D model was created using FreeCAD, which is my preferred open source CAD package. I used Slic3r for generating the G-code. And my printer is a RepRapPro Huxley, which has a bowden extruder that might actually not be ideal for extruding a soft and springy filament.

Print 1

This was done using my regular PLA/ABS profile. I had to abort the very first attempt, as the filament wasn't being extruded continuously. I then:

- Increased the extruder temperature from the low temperature that felt right while manually extruding the filament
- Reduced the speed using the M220 command
- Upped the heated bed temperature to 85°C

Much to my amazement, the rubber foot actually printed sort of okay. It was, however, sticking so hard to the "Kapton" tape that removing it actually pulled the tape off the print bed!

Prints 1 through 4

Print 2

I then tried to create a specific profile for printing the rubber filament:

- Reduced the printing speeds to avoid having to scale them using the M220 command
- Removed the "Kapton" tape, as it had become wrinkled anyway
- Printed without heat, on the bare aluminium print bed

It printed with roughly the same quality as the first print, but was very, very easy to remove.

Print 3

I noticed that the hot end seemed quite "laggy", probably caused by the flexible nature of the filament, and I therefore made some additional changes:
- All print speeds were set to 15 mm/s, to avoid having the extruder change speed
- Retract was disabled, again to keep a constant pressure in the hot end
- "Skirt loops" was increased to 4, to give the hot end more time to build up a constant pressure
- Infill was reduced from 50% to 0%, to see if vibrations were causing the surface defects
- The heated bed was set to 40°C

Just after starting the print, I realised that setting infill to 0% would cause some parts to be printed in mid-air with nothing supporting them from below. Out of curiosity, I did however allow the print to continue. The printer managed to print the part despite the fact that it was "unprintable"… Also, the surface finish was very satisfying. Due to the 0% infill, the part was slightly softer, as was to be expected. As I don't like printing the impossible (it may or may not succeed), I made one small change: I changed the infill back to 50%. I'm pleased to report that the surface finish seems to be just as good as before.

Printer settings

Please keep in mind that printer settings vary from printer to printer, and that the ones described here may not be optimal, even for my own printer. The following list is semi-sorted by what I think are probably the most important settings:

- No retract
- Uniform print speed (of 15 mm/s)
- Multi-loop skirt (4 loops)
- Hot end temperature 240°C
- Print bed temperature 40°C
- Travel speed 100 mm/s
- Extrusion width 0.5 mm with a 0.5 mm nozzle
- First layer 50% (might actually be a bad idea)
- Layer height 0.3 mm

Again, while reading this, keep in mind that I haven't played very much with the temperatures. I had some undocumented failures after print 1, where the extruder/hot end seemed to jam, and I haven't dared reduce the temperature again, as I needed/wanted some functional prints. The problems may, however, be related to too-fast extrusion, filament loading, and/or the filament being deformed by the retracts. My prints were stringing slightly internally; lowering the temperature may be able to reduce this…
Posted almost 12 years ago
Let's make it simple. I am using Xilinx ISE 14.6; it will fail to install the cable driver. Just ignore that error and do this:

    sudo apt-get install fxload gitk git-gui build-essential libc6-dev-i386 ia32-libs
    cd /home/Xilinx  # I like to install them under /home
    sudo git clone git://git.zerfleddert.de/usb-driver
    cd usb-driver/
    make  # build step implied by the original instructions (hence build-essential above)
    cd /lib/x86_64-linux-gnu/ && sudo ln -s libusb-0.1.so.4 libusb.so

These links may help:

http://www.george-smart.co.uk/wiki/Xilinx_JTAG_Linux#Download_the_driver_source
http://forums.xilinx.com/t5/Installation-and-Licensing/ISE-11-2-Impact-can-t-find-USB-II-cable-SLED-11-Linux-64-bit/m-p/42064?query.id=386680#M467
Posted almost 12 years ago
We are continuously improving the Btctelecom user experience, starting with the homepage. Here is a candidate design: https://dev.btctele.com/index2.php. If you have any comments or suggestions, please leave them here. Happy Btc + Telecom
Posted almost 12 years ago
Volume 1 of The Feynman Lectures on Physics is finally available in digital form, and in a format that doesn't suck! (I hate PDF). If you've ever wanted to learn or relearn physics, nothing is better.

"The special problem we tried to get at with these lectures was to maintain the interest of the very enthusiastic and rather smart students coming out of the high schools and into Caltech. They have heard a lot about how interesting and exciting physics is—the theory of relativity, quantum mechanics, and other modern ideas. By the end of two years of our previous course, many would be very discouraged because there were really very few grand, new, modern ideas presented to them. They were made to study inclined planes, electrostatics, and so forth, and after two years it was quite stultifying. The problem was whether or not we could make a course which would save the more advanced and excited student by maintaining his enthusiasm."